Test Report: Docker_Linux_crio_arm64 21796

dade2a2e0f7c4c88a0aa5c1a92ad2c1084f27e44:2025-10-25:42053

Failed tests (36/326)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.29
35 TestAddons/parallel/Registry 14.03
36 TestAddons/parallel/RegistryCreds 0.47
37 TestAddons/parallel/Ingress 144.89
38 TestAddons/parallel/InspektorGadget 6.26
39 TestAddons/parallel/MetricsServer 5.37
41 TestAddons/parallel/CSI 45.55
42 TestAddons/parallel/Headlamp 3.54
43 TestAddons/parallel/CloudSpanner 5.35
44 TestAddons/parallel/LocalPath 8.69
45 TestAddons/parallel/NvidiaDevicePlugin 5.28
46 TestAddons/parallel/Yakd 6.27
97 TestFunctional/parallel/ServiceCmdConnect 603.87
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.14
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.11
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.43
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
146 TestFunctional/parallel/ServiceCmd/DeployApp 600.9
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.61
153 TestFunctional/parallel/ServiceCmd/Format 0.59
154 TestFunctional/parallel/ServiceCmd/URL 0.48
190 TestJSONOutput/pause/Command 2.65
196 TestJSONOutput/unpause/Command 1.86
280 TestPause/serial/Pause 7.21
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.54
302 TestStartStop/group/old-k8s-version/serial/Pause 6.86
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.58
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.24
320 TestStartStop/group/no-preload/serial/Pause 6.13
326 TestStartStop/group/embed-certs/serial/Pause 7.93
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.46
338 TestStartStop/group/newest-cni/serial/Pause 7.1
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.23
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.98

TestAddons/serial/Volcano (0.29s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-468341 addons disable volcano --alsologtostderr -v=1: exit status 11 (292.543824ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:32:56.405653   11038 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:32:56.406991   11038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:56.407010   11038 out.go:374] Setting ErrFile to fd 2...
	I1025 08:32:56.407017   11038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:32:56.407314   11038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:32:56.407633   11038 mustload.go:65] Loading cluster: addons-468341
	I1025 08:32:56.408068   11038 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:56.408088   11038 addons.go:606] checking whether the cluster is paused
	I1025 08:32:56.408231   11038 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:32:56.408249   11038 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:32:56.408728   11038 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:32:56.442757   11038 ssh_runner.go:195] Run: systemctl --version
	I1025 08:32:56.442819   11038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:32:56.459914   11038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:32:56.564737   11038 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:32:56.564880   11038 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:32:56.598445   11038 cri.go:89] found id: "5fd1e4aa2eaec9a625e731b2310d7e53df6715992458f7b34f24c564a0371c93"
	I1025 08:32:56.598472   11038 cri.go:89] found id: "5f5e2ff55f9b9d96f5647e7d06d29ba2d01ac53c5f3bebb14c41ac3b15d46dbe"
	I1025 08:32:56.598477   11038 cri.go:89] found id: "fde287f23459127cc0438aff73855d690ce348105c0984d314227ebb893c0f27"
	I1025 08:32:56.598481   11038 cri.go:89] found id: "1c4a84678a48fe97a0d5e107fffed343bbdb5797c60914bfc1787ef22ec8aab6"
	I1025 08:32:56.598484   11038 cri.go:89] found id: "3976f771a2e1f4333179d9b66178f2b8d0e5df890f3e3934fa6b45cecc30471a"
	I1025 08:32:56.598488   11038 cri.go:89] found id: "c675c035a6dbbaf595ca82e7e5494c65d80846fa4403b093199b14027e8a1667"
	I1025 08:32:56.598491   11038 cri.go:89] found id: "f4608f1e203354a663895813be5940b353d5999288ba102363d0fc2ab5bee267"
	I1025 08:32:56.598494   11038 cri.go:89] found id: "2cb17a3d4c7c6272eb0c5b9a7105721c94d5c62089aae769495e41fd49a9130b"
	I1025 08:32:56.598497   11038 cri.go:89] found id: "9dfa9f05089922f4312c61cbba94162087a4b0ccb02343ec9524bd69b39eec86"
	I1025 08:32:56.598504   11038 cri.go:89] found id: "149fe53d8f125d10a2d1e5e32c97a97f55639e392cf1a6745398ed8ac79e7d66"
	I1025 08:32:56.598512   11038 cri.go:89] found id: "c0dd415aff39e64402a3d64867549a7a3b05dfcf210f7e1f94409f91384fa997"
	I1025 08:32:56.598516   11038 cri.go:89] found id: "3f2799b9c1b41eaec314d09a1269f3e858b9982860af8c4168ed54a9280b011d"
	I1025 08:32:56.598520   11038 cri.go:89] found id: "d658e0bd52e0d32446762f4e3466c7f28b2e31c7a876c4caaddd5444b238f373"
	I1025 08:32:56.598524   11038 cri.go:89] found id: "6abccf54d473ff21bf8b7e7a67510c44ab418b13e2ded840ecd714d35bc33050"
	I1025 08:32:56.598527   11038 cri.go:89] found id: "710588f58555a870efb846b35789c67ba50b0c47c5b86a2c12d2ebe8b7d3cf14"
	I1025 08:32:56.598536   11038 cri.go:89] found id: "990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291"
	I1025 08:32:56.598551   11038 cri.go:89] found id: "3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549"
	I1025 08:32:56.598556   11038 cri.go:89] found id: "6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443"
	I1025 08:32:56.598559   11038 cri.go:89] found id: "11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2"
	I1025 08:32:56.598562   11038 cri.go:89] found id: "a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215"
	I1025 08:32:56.598567   11038 cri.go:89] found id: "f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179"
	I1025 08:32:56.598570   11038 cri.go:89] found id: "ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd"
	I1025 08:32:56.598573   11038 cri.go:89] found id: "2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c"
	I1025 08:32:56.598577   11038 cri.go:89] found id: ""
	I1025 08:32:56.598627   11038 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:32:56.613610   11038 out.go:203] 
	W1025 08:32:56.616475   11038 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:32:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:32:56.616501   11038 out.go:285] * 
	* 
	W1025 08:32:56.620328   11038 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:32:56.623278   11038 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-468341 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.29s)
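All of the MK_ADDON_DISABLE_PAUSED failures in this run share one root cause: before disabling an addon, minikube checks whether the cluster is paused by listing runc containers, and "sudo runc list -f json" exits 1 on this crio image because /run/runc does not exist, even though crictl can see the containers. A minimal reproduction sketch, reusing the node-side commands from the trace above (driving them through "minikube ssh" is an illustrative assumption, not something the test does):

	# the crictl half of the paused check succeeds and prints the kube-system container ids
	out/minikube-linux-arm64 -p addons-468341 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# the runc half is what aborts the disable
	out/minikube-linux-arm64 -p addons-468341 ssh "sudo runc list -f json"
	# expected on this image: level=error msg="open /run/runc: no such file or directory"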

TestAddons/parallel/Registry (14.03s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.153568ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-bl9lz" [1d570e3f-1a7f-47f7-9a56-92f7a27efe03] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003017461s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-xjrqf" [8a4de784-ff9c-48be-a85b-956955a98f06] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003998593s
addons_test.go:392: (dbg) Run:  kubectl --context addons-468341 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-468341 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-468341 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.485949121s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 ip
2025/10/25 08:33:20 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-468341 addons disable registry --alsologtostderr -v=1: exit status 11 (269.570722ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:33:20.722720   11567 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:33:20.723086   11567 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:20.723103   11567 out.go:374] Setting ErrFile to fd 2...
	I1025 08:33:20.723108   11567 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:20.723387   11567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:33:20.723662   11567 mustload.go:65] Loading cluster: addons-468341
	I1025 08:33:20.724069   11567 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:20.724088   11567 addons.go:606] checking whether the cluster is paused
	I1025 08:33:20.724224   11567 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:20.724242   11567 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:33:20.724757   11567 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:33:20.742487   11567 ssh_runner.go:195] Run: systemctl --version
	I1025 08:33:20.742545   11567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:33:20.760020   11567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:33:20.868826   11567 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:33:20.868929   11567 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:33:20.913235   11567 cri.go:89] found id: "5fd1e4aa2eaec9a625e731b2310d7e53df6715992458f7b34f24c564a0371c93"
	I1025 08:33:20.913258   11567 cri.go:89] found id: "5f5e2ff55f9b9d96f5647e7d06d29ba2d01ac53c5f3bebb14c41ac3b15d46dbe"
	I1025 08:33:20.913268   11567 cri.go:89] found id: "fde287f23459127cc0438aff73855d690ce348105c0984d314227ebb893c0f27"
	I1025 08:33:20.913272   11567 cri.go:89] found id: "1c4a84678a48fe97a0d5e107fffed343bbdb5797c60914bfc1787ef22ec8aab6"
	I1025 08:33:20.913276   11567 cri.go:89] found id: "3976f771a2e1f4333179d9b66178f2b8d0e5df890f3e3934fa6b45cecc30471a"
	I1025 08:33:20.913279   11567 cri.go:89] found id: "c675c035a6dbbaf595ca82e7e5494c65d80846fa4403b093199b14027e8a1667"
	I1025 08:33:20.913290   11567 cri.go:89] found id: "f4608f1e203354a663895813be5940b353d5999288ba102363d0fc2ab5bee267"
	I1025 08:33:20.913293   11567 cri.go:89] found id: "2cb17a3d4c7c6272eb0c5b9a7105721c94d5c62089aae769495e41fd49a9130b"
	I1025 08:33:20.913297   11567 cri.go:89] found id: "9dfa9f05089922f4312c61cbba94162087a4b0ccb02343ec9524bd69b39eec86"
	I1025 08:33:20.913305   11567 cri.go:89] found id: "149fe53d8f125d10a2d1e5e32c97a97f55639e392cf1a6745398ed8ac79e7d66"
	I1025 08:33:20.913309   11567 cri.go:89] found id: "c0dd415aff39e64402a3d64867549a7a3b05dfcf210f7e1f94409f91384fa997"
	I1025 08:33:20.913312   11567 cri.go:89] found id: "3f2799b9c1b41eaec314d09a1269f3e858b9982860af8c4168ed54a9280b011d"
	I1025 08:33:20.913315   11567 cri.go:89] found id: "d658e0bd52e0d32446762f4e3466c7f28b2e31c7a876c4caaddd5444b238f373"
	I1025 08:33:20.913318   11567 cri.go:89] found id: "6abccf54d473ff21bf8b7e7a67510c44ab418b13e2ded840ecd714d35bc33050"
	I1025 08:33:20.913321   11567 cri.go:89] found id: "710588f58555a870efb846b35789c67ba50b0c47c5b86a2c12d2ebe8b7d3cf14"
	I1025 08:33:20.913336   11567 cri.go:89] found id: "990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291"
	I1025 08:33:20.913341   11567 cri.go:89] found id: "3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549"
	I1025 08:33:20.913346   11567 cri.go:89] found id: "6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443"
	I1025 08:33:20.913353   11567 cri.go:89] found id: "11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2"
	I1025 08:33:20.913357   11567 cri.go:89] found id: "a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215"
	I1025 08:33:20.913361   11567 cri.go:89] found id: "f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179"
	I1025 08:33:20.913364   11567 cri.go:89] found id: "ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd"
	I1025 08:33:20.913366   11567 cri.go:89] found id: "2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c"
	I1025 08:33:20.913369   11567 cri.go:89] found id: ""
	I1025 08:33:20.913420   11567 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:33:20.932705   11567 out.go:203] 
	W1025 08:33:20.935650   11567 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:33:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:33:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:33:20.935678   11567 out.go:285] * 
	* 
	W1025 08:33:20.939454   11567 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:33:20.942362   11567 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-468341 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.03s)

TestAddons/parallel/RegistryCreds (0.47s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.518782ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-468341
addons_test.go:332: (dbg) Run:  kubectl --context addons-468341 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-468341 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (261.971037ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:34:12.959694   13601 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:34:12.959903   13601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:34:12.959939   13601 out.go:374] Setting ErrFile to fd 2...
	I1025 08:34:12.959962   13601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:34:12.960241   13601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:34:12.960564   13601 mustload.go:65] Loading cluster: addons-468341
	I1025 08:34:12.960981   13601 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:34:12.961028   13601 addons.go:606] checking whether the cluster is paused
	I1025 08:34:12.961163   13601 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:34:12.961200   13601 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:34:12.961688   13601 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:34:12.984470   13601 ssh_runner.go:195] Run: systemctl --version
	I1025 08:34:12.984536   13601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:34:13.005927   13601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:34:13.116525   13601 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:34:13.116623   13601 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:34:13.147807   13601 cri.go:89] found id: "5fd1e4aa2eaec9a625e731b2310d7e53df6715992458f7b34f24c564a0371c93"
	I1025 08:34:13.147883   13601 cri.go:89] found id: "5f5e2ff55f9b9d96f5647e7d06d29ba2d01ac53c5f3bebb14c41ac3b15d46dbe"
	I1025 08:34:13.147894   13601 cri.go:89] found id: "fde287f23459127cc0438aff73855d690ce348105c0984d314227ebb893c0f27"
	I1025 08:34:13.147899   13601 cri.go:89] found id: "1c4a84678a48fe97a0d5e107fffed343bbdb5797c60914bfc1787ef22ec8aab6"
	I1025 08:34:13.147902   13601 cri.go:89] found id: "3976f771a2e1f4333179d9b66178f2b8d0e5df890f3e3934fa6b45cecc30471a"
	I1025 08:34:13.147906   13601 cri.go:89] found id: "c675c035a6dbbaf595ca82e7e5494c65d80846fa4403b093199b14027e8a1667"
	I1025 08:34:13.147909   13601 cri.go:89] found id: "f4608f1e203354a663895813be5940b353d5999288ba102363d0fc2ab5bee267"
	I1025 08:34:13.147913   13601 cri.go:89] found id: "2cb17a3d4c7c6272eb0c5b9a7105721c94d5c62089aae769495e41fd49a9130b"
	I1025 08:34:13.147916   13601 cri.go:89] found id: "9dfa9f05089922f4312c61cbba94162087a4b0ccb02343ec9524bd69b39eec86"
	I1025 08:34:13.147922   13601 cri.go:89] found id: "149fe53d8f125d10a2d1e5e32c97a97f55639e392cf1a6745398ed8ac79e7d66"
	I1025 08:34:13.147925   13601 cri.go:89] found id: "c0dd415aff39e64402a3d64867549a7a3b05dfcf210f7e1f94409f91384fa997"
	I1025 08:34:13.147928   13601 cri.go:89] found id: "3f2799b9c1b41eaec314d09a1269f3e858b9982860af8c4168ed54a9280b011d"
	I1025 08:34:13.147932   13601 cri.go:89] found id: "d658e0bd52e0d32446762f4e3466c7f28b2e31c7a876c4caaddd5444b238f373"
	I1025 08:34:13.147935   13601 cri.go:89] found id: "6abccf54d473ff21bf8b7e7a67510c44ab418b13e2ded840ecd714d35bc33050"
	I1025 08:34:13.147938   13601 cri.go:89] found id: "710588f58555a870efb846b35789c67ba50b0c47c5b86a2c12d2ebe8b7d3cf14"
	I1025 08:34:13.147943   13601 cri.go:89] found id: "990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291"
	I1025 08:34:13.147949   13601 cri.go:89] found id: "3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549"
	I1025 08:34:13.147953   13601 cri.go:89] found id: "6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443"
	I1025 08:34:13.147956   13601 cri.go:89] found id: "11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2"
	I1025 08:34:13.147959   13601 cri.go:89] found id: "a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215"
	I1025 08:34:13.147963   13601 cri.go:89] found id: "f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179"
	I1025 08:34:13.147969   13601 cri.go:89] found id: "ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd"
	I1025 08:34:13.147972   13601 cri.go:89] found id: "2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c"
	I1025 08:34:13.147975   13601 cri.go:89] found id: ""
	I1025 08:34:13.148035   13601 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:34:13.163051   13601 out.go:203] 
	W1025 08:34:13.165914   13601 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:34:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:34:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:34:13.165938   13601 out.go:285] * 
	* 
	W1025 08:34:13.169824   13601 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:34:13.172681   13601 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-468341 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.47s)

TestAddons/parallel/Ingress (144.89s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-468341 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-468341 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-468341 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [abca2d14-8b84-40df-878c-a12f9827ac24] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [abca2d14-8b84-40df-878c-a12f9827ac24] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00445158s
I1025 08:33:51.079333    4110 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-468341 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.095839468s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-468341 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
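The curl step above timed out rather than erroring: ssh passes through the remote command's exit status, and 28 is curl's "operation timed out" code, so the request reached the node but nothing answered on port 80 before curl gave up. A hedged re-check sketch with an explicit timeout and a visible HTTP status (--max-time and -w are illustrative additions to the command the test runs):

	out/minikube-linux-arm64 -p addons-468341 ssh "curl -s -o /dev/null -w '%{http_code}' --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# exit status 28 again would confirm ingress-nginx is not answering proxied requests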
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-468341
helpers_test.go:243: (dbg) docker inspect addons-468341:

-- stdout --
	[
	    {
	        "Id": "921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111",
	        "Created": "2025-10-25T08:30:22.932850145Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5281,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T08:30:23.005048345Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111/hostname",
	        "HostsPath": "/var/lib/docker/containers/921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111/hosts",
	        "LogPath": "/var/lib/docker/containers/921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111/921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111-json.log",
	        "Name": "/addons-468341",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-468341:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-468341",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111",
	                "LowerDir": "/var/lib/docker/overlay2/658dff37d510687d7ea850578e6efc1df446bb050fd0131ea19f38935eea4f9e-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/658dff37d510687d7ea850578e6efc1df446bb050fd0131ea19f38935eea4f9e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/658dff37d510687d7ea850578e6efc1df446bb050fd0131ea19f38935eea4f9e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/658dff37d510687d7ea850578e6efc1df446bb050fd0131ea19f38935eea4f9e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-468341",
	                "Source": "/var/lib/docker/volumes/addons-468341/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-468341",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-468341",
	                "name.minikube.sigs.k8s.io": "addons-468341",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "953f4c687d4c2f5a2e91e34ee118fa4aa98ea2440602b2c4cba8d007779b5b17",
	            "SandboxKey": "/var/run/docker/netns/953f4c687d4c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-468341": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:32:ff:26:0c:5d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "422586f035103bb55aaa9f0e31f2b43fa36e4fbc27c8f4a97382f7de2b9ed97e",
	                    "EndpointID": "cfb1c4d45c6fef1387335910a8ca12188b4265998255f0ff9cd8603646ab1513",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-468341",
	                        "921bcbb16e37"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
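The port map in the inspect dump is what the harness parses for SSH access; the same Go template visible in the cli_runner lines of the stderr traces pulls the host port bound to the node's SSH port directly, and should print 32768 for this container:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-468341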
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-468341 -n addons-468341
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-468341 logs -n 25: (1.560073527s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-812739                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-812739 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-223268 --alsologtostderr --binary-mirror http://127.0.0.1:41137 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-223268   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-223268                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-223268   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ addons  │ enable dashboard -p addons-468341                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-468341                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ start   │ -p addons-468341 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ addons-468341 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-468341 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-468341 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-468341 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ ip      │ addons-468341 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │ 25 Oct 25 08:33 UTC │
	│ addons  │ addons-468341 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-468341 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ ssh     │ addons-468341 ssh cat /opt/local-path-provisioner/pvc-e010f192-5941-4327-9df8-ac1fe331714f_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │ 25 Oct 25 08:33 UTC │
	│ addons  │ enable headlamp -p addons-468341 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-468341 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-468341 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-468341 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-468341 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ ssh     │ addons-468341 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-468341 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:34 UTC │                     │
	│ addons  │ addons-468341 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:34 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-468341                                                                                                                                                                                                                                                                                                                                                                                           │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:34 UTC │ 25 Oct 25 08:34 UTC │
	│ addons  │ addons-468341 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:34 UTC │                     │
	│ ip      │ addons-468341 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:36 UTC │ 25 Oct 25 08:36 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:29:55
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
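
Every entry below follows the klog shape documented in the header line above: a severity letter, the date and timestamp, the thread id, the emitting file:line, and the message. For anyone post-processing these logs, here is a minimal Go sketch of a parser for that shape (the regexp and field names are illustrative, not part of minikube):

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches entries such as:
//   I1025 08:29:55.091280    4872 out.go:360] Setting OutFile to fd 1 ...
// Groups: severity, mmdd, hh:mm:ss.uuuuuu, thread id, file:line, message.
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	sample := "I1025 08:29:55.091280    4872 out.go:360] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(sample); m != nil {
		fmt.Printf("severity=%s date=%s time=%s tid=%s loc=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
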
	I1025 08:29:55.091280    4872 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:29:55.091482    4872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:55.091493    4872 out.go:374] Setting ErrFile to fd 2...
	I1025 08:29:55.091498    4872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:55.091795    4872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:29:55.092294    4872 out.go:368] Setting JSON to false
	I1025 08:29:55.093073    4872 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":746,"bootTime":1761380249,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 08:29:55.093145    4872 start.go:141] virtualization:  
	I1025 08:29:55.114103    4872 out.go:179] * [addons-468341] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 08:29:55.144478    4872 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 08:29:55.144585    4872 notify.go:220] Checking for updates...
	I1025 08:29:55.203792    4872 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:29:55.219806    4872 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 08:29:55.242998    4872 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 08:29:55.273176    4872 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 08:29:55.296812    4872 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 08:29:55.321978    4872 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:29:55.341948    4872 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 08:29:55.342109    4872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:29:55.408178    4872 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-25 08:29:55.398857938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
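
The dump above is the raw result of `docker system info --format "{{json .}}"`, which minikube decodes to learn the host's CPU count, memory, architecture and cgroup driver before validating the driver. A hedged Go sketch of that probe (the struct lists only a few of the JSON keys visible in the log; it is not minikube's actual type):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo captures just the fields this sketch needs; the real
// payload carries many more, as the log line above shows.
type dockerInfo struct {
	NCPU         int    `json:"NCPU"`
	MemTotal     int64  `json:"MemTotal"`
	Architecture string `json:"Architecture"`
	CgroupDriver string `json:"CgroupDriver"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("cpus=%d mem=%d arch=%s cgroup=%s\n",
		info.NCPU, info.MemTotal, info.Architecture, info.CgroupDriver)
}
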
	I1025 08:29:55.408290    4872 docker.go:318] overlay module found
	I1025 08:29:55.430026    4872 out.go:179] * Using the docker driver based on user configuration
	I1025 08:29:55.446624    4872 start.go:305] selected driver: docker
	I1025 08:29:55.446649    4872 start.go:925] validating driver "docker" against <nil>
	I1025 08:29:55.446662    4872 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 08:29:55.447377    4872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:29:55.510286    4872 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-25 08:29:55.499127841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 08:29:55.510449    4872 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:29:55.510696    4872 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 08:29:55.519365    4872 out.go:179] * Using Docker driver with root privileges
	I1025 08:29:55.526147    4872 cni.go:84] Creating CNI manager for ""
	I1025 08:29:55.526221    4872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:29:55.526229    4872 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 08:29:55.526312    4872 start.go:349] cluster config:
	{Name:addons-468341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-468341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:29:55.536374    4872 out.go:179] * Starting "addons-468341" primary control-plane node in "addons-468341" cluster
	I1025 08:29:55.543836    4872 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 08:29:55.549422    4872 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 08:29:55.557089    4872 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:29:55.557147    4872 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 08:29:55.557157    4872 cache.go:58] Caching tarball of preloaded images
	I1025 08:29:55.557219    4872 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 08:29:55.557511    4872 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 08:29:55.557527    4872 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 08:29:55.557868    4872 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/config.json ...
	I1025 08:29:55.557889    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/config.json: {Name:mka19d1f2dad675e22268b31a3d755a4a49d3897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
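
The profile save above serializes the cluster config just logged to config.json under the profile directory, guarded by a write lock. Reading a few of those fields back is straightforward; this sketch uses a deliberately tiny stand-in type, not minikube's config schema:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// profileConfig mirrors a small slice of the cluster config logged above.
type profileConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		ContainerRuntime  string
	}
}

func main() {
	// Path taken from the log line above; in general it lives under MINIKUBE_HOME.
	data, err := os.ReadFile("/home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/config.json")
	if err != nil {
		panic(err)
	}
	var cfg profileConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion, cfg.KubernetesConfig.ContainerRuntime)
}
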
	I1025 08:29:55.574959    4872 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 08:29:55.575093    4872 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 08:29:55.575125    4872 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1025 08:29:55.575135    4872 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1025 08:29:55.575149    4872 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1025 08:29:55.575155    4872 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1025 08:30:14.081320    4872 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1025 08:30:14.081353    4872 cache.go:232] Successfully downloaded all kic artifacts
	I1025 08:30:14.081382    4872 start.go:360] acquireMachinesLock for addons-468341: {Name:mkc686fd048fc7820c5fe7ce0d23697ebcad8b28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 08:30:14.081499    4872 start.go:364] duration metric: took 99.438µs to acquireMachinesLock for "addons-468341"
	I1025 08:30:14.081524    4872 start.go:93] Provisioning new machine with config: &{Name:addons-468341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-468341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 08:30:14.081613    4872 start.go:125] createHost starting for "" (driver="docker")
	I1025 08:30:14.085124    4872 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 08:30:14.085365    4872 start.go:159] libmachine.API.Create for "addons-468341" (driver="docker")
	I1025 08:30:14.085414    4872 client.go:168] LocalClient.Create starting
	I1025 08:30:14.085547    4872 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem
	I1025 08:30:14.584443    4872 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem
	I1025 08:30:15.775099    4872 cli_runner.go:164] Run: docker network inspect addons-468341 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 08:30:15.791782    4872 cli_runner.go:211] docker network inspect addons-468341 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 08:30:15.791882    4872 network_create.go:284] running [docker network inspect addons-468341] to gather additional debugging logs...
	I1025 08:30:15.791905    4872 cli_runner.go:164] Run: docker network inspect addons-468341
	W1025 08:30:15.807823    4872 cli_runner.go:211] docker network inspect addons-468341 returned with exit code 1
	I1025 08:30:15.807855    4872 network_create.go:287] error running [docker network inspect addons-468341]: docker network inspect addons-468341: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-468341 not found
	I1025 08:30:15.807870    4872 network_create.go:289] output of [docker network inspect addons-468341]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-468341 not found
	
	** /stderr **
	I1025 08:30:15.807981    4872 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 08:30:15.825480    4872 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ac3490}
	I1025 08:30:15.825520    4872 network_create.go:124] attempt to create docker network addons-468341 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 08:30:15.825576    4872 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-468341 addons-468341
	I1025 08:30:15.890317    4872 network_create.go:108] docker network addons-468341 192.168.49.0/24 created
	I1025 08:30:15.890350    4872 kic.go:121] calculated static IP "192.168.49.2" for the "addons-468341" container
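
The subnet probe above settles on the free range 192.168.49.0/24, puts the bridge gateway at .1, and gives the node the first client address, .2. That arithmetic is just "network address plus one, plus one again"; a small Go check of the derivation (the helper name is hypothetical):

package main

import (
	"fmt"
	"net/netip"
)

// gatewayAndFirstClient derives the first two usable addresses of a
// subnet: the bridge gateway and the first container IP.
func gatewayAndFirstClient(cidr string) (netip.Addr, netip.Addr, error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return netip.Addr{}, netip.Addr{}, err
	}
	gw := p.Addr().Next()     // 192.168.49.1
	return gw, gw.Next(), nil // first client: 192.168.49.2
}

func main() {
	gw, ip, err := gatewayAndFirstClient("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	fmt.Println("gateway:", gw, "static IP:", ip)
}
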
	I1025 08:30:15.890423    4872 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 08:30:15.906654    4872 cli_runner.go:164] Run: docker volume create addons-468341 --label name.minikube.sigs.k8s.io=addons-468341 --label created_by.minikube.sigs.k8s.io=true
	I1025 08:30:15.924069    4872 oci.go:103] Successfully created a docker volume addons-468341
	I1025 08:30:15.924153    4872 cli_runner.go:164] Run: docker run --rm --name addons-468341-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-468341 --entrypoint /usr/bin/test -v addons-468341:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 08:30:18.415356    4872 cli_runner.go:217] Completed: docker run --rm --name addons-468341-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-468341 --entrypoint /usr/bin/test -v addons-468341:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.491163077s)
	I1025 08:30:18.415399    4872 oci.go:107] Successfully prepared a docker volume addons-468341
	I1025 08:30:18.415428    4872 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:30:18.415447    4872 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 08:30:18.415522    4872 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-468341:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 08:30:22.860306    4872 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-468341:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.444745148s)
	I1025 08:30:22.860341    4872 kic.go:203] duration metric: took 4.444890249s to extract preloaded images to volume ...
	W1025 08:30:22.860496    4872 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 08:30:22.860611    4872 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 08:30:22.917078    4872 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-468341 --name addons-468341 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-468341 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-468341 --network addons-468341 --ip 192.168.49.2 --volume addons-468341:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 08:30:23.242169    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Running}}
	I1025 08:30:23.269927    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:23.291382    4872 cli_runner.go:164] Run: docker exec addons-468341 stat /var/lib/dpkg/alternatives/iptables
	I1025 08:30:23.344104    4872 oci.go:144] the created container "addons-468341" has a running status.
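
Between the long `docker run` and the status line above, minikube inspects the container until it reports running. A hedged sketch of that readiness poll (the helper name, timeout and interval are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `docker container inspect` until the container's
// State.Running flag turns true or the deadline expires.
func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Running}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %s not running after %s", name, timeout)
}

func main() {
	if err := waitRunning("addons-468341", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("container is running")
}
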
	I1025 08:30:23.344130    4872 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa...
	I1025 08:30:23.401331    4872 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 08:30:23.421304    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:23.440122    4872 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 08:30:23.440145    4872 kic_runner.go:114] Args: [docker exec --privileged addons-468341 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 08:30:23.493304    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:23.517428    4872 machine.go:93] provisionDockerMachine start ...
	I1025 08:30:23.517537    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:23.542649    4872 main.go:141] libmachine: Using SSH client type: native
	I1025 08:30:23.542991    4872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 08:30:23.543014    4872 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 08:30:23.543781    4872 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
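
The handshake EOF here is benign: sshd inside the freshly started container is not up yet, and the next attempt (about three seconds later, below) succeeds. A sketch of such a retry loop with golang.org/x/crypto/ssh; the attempt count and the relaxed host key check are illustrative only:

package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps attempting an SSH connection until sshd answers;
// early handshake EOFs are expected while the container boots.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c *ssh.Client
		if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
			return c, nil
		}
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("ssh not ready after %d attempts: %w", attempts, err)
}

func main() {
	cfg := &ssh.ClientConfig{
		User: "docker",
		// Real callers would also set Auth using the id_rsa generated above.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local node only
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:32768", cfg, 10)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected")
}
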
	I1025 08:30:26.697631    4872 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-468341
	
	I1025 08:30:26.697654    4872 ubuntu.go:182] provisioning hostname "addons-468341"
	I1025 08:30:26.697746    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:26.716111    4872 main.go:141] libmachine: Using SSH client type: native
	I1025 08:30:26.716435    4872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 08:30:26.716452    4872 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-468341 && echo "addons-468341" | sudo tee /etc/hostname
	I1025 08:30:26.875142    4872 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-468341
	
	I1025 08:30:26.875260    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:26.893093    4872 main.go:141] libmachine: Using SSH client type: native
	I1025 08:30:26.893388    4872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 08:30:26.893409    4872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-468341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-468341/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-468341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 08:30:27.040683    4872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 08:30:27.040805    4872 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 08:30:27.040844    4872 ubuntu.go:190] setting up certificates
	I1025 08:30:27.040876    4872 provision.go:84] configureAuth start
	I1025 08:30:27.040951    4872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-468341
	I1025 08:30:27.058326    4872 provision.go:143] copyHostCerts
	I1025 08:30:27.058405    4872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 08:30:27.058652    4872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 08:30:27.058735    4872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 08:30:27.058819    4872 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.addons-468341 san=[127.0.0.1 192.168.49.2 addons-468341 localhost minikube]
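
The server certificate minted here carries the SANs listed in the log: the loopback and node IPs plus the hostname, localhost and minikube DNS names. A compressed Go sketch of issuing a SAN-bearing server cert from a CA with crypto/x509 (error handling trimmed; lifetimes and key sizes are illustrative, and this mirrors the idea rather than minikube's cert code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, a stand-in for the ca.pem/ca-key.pem above.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-468341"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:     []string{"addons-468341", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
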
	I1025 08:30:27.521500    4872 provision.go:177] copyRemoteCerts
	I1025 08:30:27.521567    4872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 08:30:27.521607    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:27.538674    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:27.641479    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 08:30:27.658278    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 08:30:27.675721    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 08:30:27.692911    4872 provision.go:87] duration metric: took 652.021041ms to configureAuth
	I1025 08:30:27.692937    4872 ubuntu.go:206] setting minikube options for container-runtime
	I1025 08:30:27.693133    4872 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:30:27.693247    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:27.710124    4872 main.go:141] libmachine: Using SSH client type: native
	I1025 08:30:27.710437    4872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 08:30:27.710460    4872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 08:30:27.967624    4872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 08:30:27.967697    4872 machine.go:96] duration metric: took 4.45024481s to provisionDockerMachine
	I1025 08:30:27.967721    4872 client.go:171] duration metric: took 13.882296562s to LocalClient.Create
	I1025 08:30:27.967774    4872 start.go:167] duration metric: took 13.882391643s to libmachine.API.Create "addons-468341"
	I1025 08:30:27.967800    4872 start.go:293] postStartSetup for "addons-468341" (driver="docker")
	I1025 08:30:27.967827    4872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 08:30:27.967938    4872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 08:30:27.968046    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:27.986079    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:28.098567    4872 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 08:30:28.102127    4872 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 08:30:28.102156    4872 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 08:30:28.102168    4872 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 08:30:28.102238    4872 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 08:30:28.102267    4872 start.go:296] duration metric: took 134.446525ms for postStartSetup
	I1025 08:30:28.102619    4872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-468341
	I1025 08:30:28.119663    4872 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/config.json ...
	I1025 08:30:28.119960    4872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 08:30:28.120009    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:28.139624    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:28.239228    4872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 08:30:28.244379    4872 start.go:128] duration metric: took 14.162752491s to createHost
	I1025 08:30:28.244407    4872 start.go:83] releasing machines lock for "addons-468341", held for 14.162898666s
	I1025 08:30:28.244480    4872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-468341
	I1025 08:30:28.261779    4872 ssh_runner.go:195] Run: cat /version.json
	I1025 08:30:28.261839    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:28.262184    4872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 08:30:28.262262    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:28.286244    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:28.299617    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:28.389715    4872 ssh_runner.go:195] Run: systemctl --version
	I1025 08:30:28.483586    4872 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 08:30:28.517790    4872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 08:30:28.521882    4872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 08:30:28.522020    4872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 08:30:28.551384    4872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 08:30:28.551447    4872 start.go:495] detecting cgroup driver to use...
	I1025 08:30:28.551494    4872 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 08:30:28.551549    4872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 08:30:28.567921    4872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 08:30:28.580432    4872 docker.go:218] disabling cri-docker service (if available) ...
	I1025 08:30:28.580550    4872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 08:30:28.598177    4872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 08:30:28.617273    4872 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 08:30:28.747479    4872 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 08:30:28.887848    4872 docker.go:234] disabling docker service ...
	I1025 08:30:28.887975    4872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 08:30:28.908816    4872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 08:30:28.922073    4872 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 08:30:29.046572    4872 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 08:30:29.172480    4872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 08:30:29.186444    4872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 08:30:29.200002    4872 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 08:30:29.200067    4872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:29.208662    4872 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 08:30:29.208727    4872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:29.217689    4872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:29.226728    4872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:29.235304    4872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 08:30:29.243236    4872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:29.251931    4872 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:29.265693    4872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
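
Taken together, the sed chain above leaves /etc/crio/crio.conf.d/02-crio.conf with a handful of overridden keys. Reconstructed from the commands (not copied from the machine, and with the surrounding [crio.*] table headers elided), the relevant fragment should look roughly like:

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

The last entry lets unprivileged pods bind ports below 1024 inside the node.
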
	I1025 08:30:29.274388    4872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 08:30:29.281637    4872 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 08:30:29.281744    4872 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 08:30:29.295552    4872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 08:30:29.303070    4872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:30:29.426850    4872 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 08:30:29.549011    4872 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 08:30:29.549093    4872 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 08:30:29.552826    4872 start.go:563] Will wait 60s for crictl version
	I1025 08:30:29.552931    4872 ssh_runner.go:195] Run: which crictl
	I1025 08:30:29.556545    4872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 08:30:29.580896    4872 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 08:30:29.581077    4872 ssh_runner.go:195] Run: crio --version
	I1025 08:30:29.610402    4872 ssh_runner.go:195] Run: crio --version
	I1025 08:30:29.640415    4872 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 08:30:29.643274    4872 cli_runner.go:164] Run: docker network inspect addons-468341 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 08:30:29.659548    4872 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 08:30:29.663136    4872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 08:30:29.672910    4872 kubeadm.go:883] updating cluster {Name:addons-468341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-468341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 08:30:29.673029    4872 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:30:29.673094    4872 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:30:29.711357    4872 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 08:30:29.711380    4872 crio.go:433] Images already preloaded, skipping extraction
	I1025 08:30:29.711437    4872 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:30:29.740644    4872 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 08:30:29.740667    4872 cache_images.go:85] Images are preloaded, skipping loading
	I1025 08:30:29.740675    4872 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1025 08:30:29.740761    4872 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-468341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-468341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 08:30:29.740846    4872 ssh_runner.go:195] Run: crio config
	I1025 08:30:29.798927    4872 cni.go:84] Creating CNI manager for ""
	I1025 08:30:29.798950    4872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:30:29.798971    4872 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 08:30:29.799018    4872 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-468341 NodeName:addons-468341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 08:30:29.799182    4872 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-468341"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 08:30:29.799254    4872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 08:30:29.806681    4872 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 08:30:29.806751    4872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 08:30:29.814141    4872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 08:30:29.826729    4872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 08:30:29.839252    4872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1025 08:30:29.851772    4872 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 08:30:29.855447    4872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 08:30:29.865175    4872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:30:29.989812    4872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 08:30:30.039910    4872 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341 for IP: 192.168.49.2
	I1025 08:30:30.039989    4872 certs.go:195] generating shared ca certs ...
	I1025 08:30:30.040022    4872 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:30.040216    4872 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 08:30:30.411157    4872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt ...
	I1025 08:30:30.411205    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt: {Name:mk52523ff552b275190ee126a048106c7e302f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:30.411443    4872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key ...
	I1025 08:30:30.411460    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key: {Name:mkb1a534575f9d829c260998bd8a08f47ad14582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:30.411558    4872 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 08:30:30.632380    4872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt ...
	I1025 08:30:30.632411    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt: {Name:mk73df502898df6e5dc6aa607bc2f5fd24d2e8be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:30.632585    4872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key ...
	I1025 08:30:30.632598    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key: {Name:mk3e093eefd488f53bba9abe0f102f8a60ee7e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:30.632676    4872 certs.go:257] generating profile certs ...
	I1025 08:30:30.632735    4872 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.key
	I1025 08:30:30.632753    4872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt with IP's: []
	I1025 08:30:31.072729    4872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt ...
	I1025 08:30:31.072761    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: {Name:mked676dc37c5446e46be4bc45a0b4fcac476eaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:31.072954    4872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.key ...
	I1025 08:30:31.072969    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.key: {Name:mkaed03af2cb2e73eb4ef8d47a01b9f81104a746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:31.073053    4872 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.key.ea2331ce
	I1025 08:30:31.073076    4872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.crt.ea2331ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1025 08:30:31.155314    4872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.crt.ea2331ce ...
	I1025 08:30:31.155344    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.crt.ea2331ce: {Name:mka305d293c5f6355df49608d150e4ab12440176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:31.155515    4872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.key.ea2331ce ...
	I1025 08:30:31.155530    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.key.ea2331ce: {Name:mk94a466220e59b4d770674af0f2ae191f9db611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:31.155614    4872 certs.go:382] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.crt.ea2331ce -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.crt
	I1025 08:30:31.155698    4872 certs.go:386] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.key.ea2331ce -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.key
	I1025 08:30:31.155755    4872 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/proxy-client.key
	I1025 08:30:31.155778    4872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/proxy-client.crt with IP's: []
	I1025 08:30:32.017070    4872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/proxy-client.crt ...
	I1025 08:30:32.017102    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/proxy-client.crt: {Name:mk80d83f4c97b8faf7d18eb92e95aa4f3b4e33e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:32.017320    4872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/proxy-client.key ...
	I1025 08:30:32.017340    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/proxy-client.key: {Name:mk2b8bed11fe3b414721931dd6020503b901cc9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:32.017532    4872 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 08:30:32.017577    4872 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 08:30:32.017608    4872 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 08:30:32.017639    4872 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
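
The 'generating ... ca cert' lines above come from minikube's cert bootstrap, which mints a self-signed minikubeCA plus a proxyClientCA and then signs the profile, apiserver, and aggregator certs with them. For orientation, this is roughly what producing such a self-signed CA looks like with Go's standard crypto/x509; a sketch, not minikube's actual crypto.go:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikubeCA"},
            NotBefore:    time.Now().Add(-time.Hour),
            // 3 years, matching the CertExpiration:26280h0m0s in the
            // cluster config logged below.
            NotAfter:              time.Now().AddDate(3, 0, 0),
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
            IsCA:                  true, // this is what makes it a CA certificate
        }
        // Self-signed: the template is its own parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        crt, _ := os.Create("ca.crt")
        defer crt.Close()
        pem.Encode(crt, &pem.Block{Type: "CERTIFICATE", Bytes: der})
        k, _ := os.Create("ca.key")
        defer k.Close()
        pem.Encode(k, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    }
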
	I1025 08:30:32.018244    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 08:30:32.039539    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 08:30:32.058746    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 08:30:32.077322    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 08:30:32.095859    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 08:30:32.113758    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 08:30:32.132034    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 08:30:32.149544    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 08:30:32.166911    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 08:30:32.185409    4872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 08:30:32.198610    4872 ssh_runner.go:195] Run: openssl version
	I1025 08:30:32.204669    4872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 08:30:32.213186    4872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:30:32.216819    4872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:30:32.216882    4872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:30:32.257612    4872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
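
The two openssl steps above wire the CA into OpenSSL's hashed trust directory: x509 -hash prints the subject hash (b5213941 here), and the <hash>.0 symlink under /etc/ssl/certs is the filename OpenSSL resolves at verification time. The same pair of steps driven from Go (a sketch; writing to the real paths needs root):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        ca := "/usr/share/ca-certificates/minikubeCA.pem"
        // Ask openssl for the subject hash, exactly as the log above does.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", ca).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // ln -fs semantics: remove any stale link, then recreate it.
        _ = os.Remove(link)
        if err := os.Symlink(ca, link); err != nil {
            log.Fatal(err)
        }
    }
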
	I1025 08:30:32.265715    4872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 08:30:32.269012    4872 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 08:30:32.269066    4872 kubeadm.go:400] StartCluster: {Name:addons-468341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-468341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:30:32.269141    4872 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:30:32.269211    4872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:30:32.295187    4872 cri.go:89] found id: ""
	I1025 08:30:32.295267    4872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 08:30:32.302833    4872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 08:30:32.310377    4872 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 08:30:32.310452    4872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 08:30:32.318252    4872 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 08:30:32.318282    4872 kubeadm.go:157] found existing configuration files:
	
	I1025 08:30:32.318368    4872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 08:30:32.325958    4872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 08:30:32.326047    4872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 08:30:32.333657    4872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 08:30:32.342062    4872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 08:30:32.342131    4872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 08:30:32.350047    4872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 08:30:32.358771    4872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 08:30:32.358875    4872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 08:30:32.366193    4872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 08:30:32.373798    4872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 08:30:32.373865    4872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
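
The four grep/rm pairs above are stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443. On a first start none of the files exist, so every grep exits with status 2 and the rm -f calls are no-ops, which is exactly what this log shows. Condensed into a sketch (the helper name is made up; the endpoint and file list mirror the log):

    package main

    import (
        "os"
        "strings"
    )

    // removeStaleKubeconfigs drops any config that does not reference the
    // expected API endpoint so kubeadm can regenerate it cleanly.
    func removeStaleKubeconfigs(endpoint string) {
        for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + name
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(path) // errors ignored, like rm -f
            }
        }
    }

    func main() {
        removeStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }
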
	I1025 08:30:32.381452    4872 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 08:30:32.429110    4872 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 08:30:32.429173    4872 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 08:30:32.461968    4872 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 08:30:32.462113    4872 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 08:30:32.462157    4872 kubeadm.go:318] OS: Linux
	I1025 08:30:32.462208    4872 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 08:30:32.462267    4872 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 08:30:32.462319    4872 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 08:30:32.462373    4872 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 08:30:32.462450    4872 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 08:30:32.462503    4872 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 08:30:32.462555    4872 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 08:30:32.462610    4872 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 08:30:32.462665    4872 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 08:30:32.535970    4872 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 08:30:32.536090    4872 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 08:30:32.536190    4872 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 08:30:32.544683    4872 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 08:30:32.549204    4872 out.go:252]   - Generating certificates and keys ...
	I1025 08:30:32.549379    4872 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 08:30:32.549504    4872 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 08:30:33.773840    4872 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 08:30:34.489262    4872 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 08:30:34.872804    4872 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 08:30:35.270246    4872 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 08:30:35.502390    4872 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 08:30:35.502817    4872 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-468341 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 08:30:35.782346    4872 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 08:30:35.782647    4872 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-468341 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 08:30:36.107461    4872 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 08:30:36.656361    4872 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 08:30:37.052900    4872 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 08:30:37.053200    4872 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 08:30:37.667727    4872 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 08:30:39.071500    4872 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 08:30:40.040619    4872 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 08:30:41.358852    4872 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 08:30:41.548077    4872 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 08:30:41.548675    4872 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 08:30:41.551370    4872 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 08:30:41.554776    4872 out.go:252]   - Booting up control plane ...
	I1025 08:30:41.554889    4872 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 08:30:41.554971    4872 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 08:30:41.555041    4872 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 08:30:41.570563    4872 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 08:30:41.570932    4872 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 08:30:41.578075    4872 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 08:30:41.578425    4872 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 08:30:41.578474    4872 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 08:30:41.709640    4872 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 08:30:41.709758    4872 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 08:30:42.719465    4872 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00996888s
	I1025 08:30:42.723089    4872 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 08:30:42.723184    4872 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1025 08:30:42.723607    4872 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 08:30:42.723697    4872 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 08:30:45.364017    4872 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.640432899s
	I1025 08:30:46.830979    4872 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.107832901s
	I1025 08:30:48.726070    4872 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002866075s
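
The kubelet-check and control-plane-check phases above are plain HTTP polling against the component health endpoints, each under the 4m0s budget kubeadm prints: the kubelet on 10248, the controller-manager on 10257, the scheduler on 10259, and the apiserver on 8443. A minimal poller in the same spirit (a sketch; kubeadm's real checker differs in details):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it returns 200 OK or the deadline passes.
    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The control-plane endpoints serve self-signed certs during bootstrap.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        for _, u := range []string{
            "http://127.0.0.1:10248/healthz",  // kubelet
            "https://127.0.0.1:10257/healthz", // kube-controller-manager
            "https://127.0.0.1:10259/livez",   // kube-scheduler
            "https://192.168.49.2:8443/livez", // kube-apiserver
        } {
            fmt.Println(u, waitHealthy(u, 4*time.Minute))
        }
    }
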
	I1025 08:30:48.749935    4872 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 08:30:48.774508    4872 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 08:30:48.789745    4872 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 08:30:48.790279    4872 kubeadm.go:318] [mark-control-plane] Marking the node addons-468341 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 08:30:48.804766    4872 kubeadm.go:318] [bootstrap-token] Using token: dj0hgz.vala652dlb0y3ydo
	I1025 08:30:48.807784    4872 out.go:252]   - Configuring RBAC rules ...
	I1025 08:30:48.807936    4872 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 08:30:48.814496    4872 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 08:30:48.826919    4872 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 08:30:48.836286    4872 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 08:30:48.840632    4872 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 08:30:48.844849    4872 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 08:30:49.137123    4872 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 08:30:49.621741    4872 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 08:30:50.132738    4872 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 08:30:50.134435    4872 kubeadm.go:318] 
	I1025 08:30:50.134515    4872 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 08:30:50.134521    4872 kubeadm.go:318] 
	I1025 08:30:50.134597    4872 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 08:30:50.134602    4872 kubeadm.go:318] 
	I1025 08:30:50.134627    4872 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 08:30:50.134685    4872 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 08:30:50.134736    4872 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 08:30:50.134740    4872 kubeadm.go:318] 
	I1025 08:30:50.134810    4872 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 08:30:50.134815    4872 kubeadm.go:318] 
	I1025 08:30:50.134861    4872 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 08:30:50.134866    4872 kubeadm.go:318] 
	I1025 08:30:50.134917    4872 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 08:30:50.134990    4872 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 08:30:50.135057    4872 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 08:30:50.135062    4872 kubeadm.go:318] 
	I1025 08:30:50.135145    4872 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 08:30:50.135220    4872 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 08:30:50.135225    4872 kubeadm.go:318] 
	I1025 08:30:50.135308    4872 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token dj0hgz.vala652dlb0y3ydo \
	I1025 08:30:50.135409    4872 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b \
	I1025 08:30:50.135430    4872 kubeadm.go:318] 	--control-plane 
	I1025 08:30:50.135434    4872 kubeadm.go:318] 
	I1025 08:30:50.135517    4872 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 08:30:50.135522    4872 kubeadm.go:318] 
	I1025 08:30:50.135603    4872 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token dj0hgz.vala652dlb0y3ydo \
	I1025 08:30:50.135704    4872 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b 
	I1025 08:30:50.138930    4872 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 08:30:50.139166    4872 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 08:30:50.139275    4872 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 08:30:50.139292    4872 cni.go:84] Creating CNI manager for ""
	I1025 08:30:50.139299    4872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:30:50.142550    4872 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 08:30:50.145511    4872 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 08:30:50.149264    4872 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 08:30:50.149285    4872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 08:30:50.166111    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
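
The recurring 'scp memory --> path (N bytes)' lines (the CNI manifest above, the kubeadm and kubelet configs earlier) are minikube streaming an in-memory buffer to a file on the node over its SSH session rather than copying a local file. One way to emulate that with the stock ssh client; the key path and address are assumptions matching this run, not minikube's internals:

    package main

    import (
        "bytes"
        "log"
        "os/exec"
    )

    // scpMemory streams data to path on the node by piping it into sudo tee,
    // which is roughly what ssh_runner's "scp memory" shorthand amounts to.
    func scpMemory(port, keyPath, path string, data []byte) error {
        cmd := exec.Command("ssh", "-p", port, "-i", keyPath,
            "-o", "StrictHostKeyChecking=no", "docker@127.0.0.1",
            "sudo tee "+path+" >/dev/null")
        cmd.Stdin = bytes.NewReader(data)
        return cmd.Run()
    }

    func main() {
        manifest := []byte("# cni.yaml contents would go here\n")
        if err := scpMemory("32768",
            "/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa",
            "/var/tmp/minikube/cni.yaml", manifest); err != nil {
            log.Fatal(err)
        }
    }
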
	I1025 08:30:50.445957    4872 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 08:30:50.446084    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:50.446118    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-468341 minikube.k8s.io/updated_at=2025_10_25T08_30_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=addons-468341 minikube.k8s.io/primary=true
	I1025 08:30:50.639309    4872 ops.go:34] apiserver oom_adj: -16
	I1025 08:30:50.639413    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:51.139905    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:51.639554    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:52.140047    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:52.640091    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:53.139854    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:53.640464    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:54.140467    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:54.640436    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:54.744684    4872 kubeadm.go:1113] duration metric: took 4.298659374s to wait for elevateKubeSystemPrivileges
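
The burst of 'kubectl get sa default' runs at roughly 500ms intervals above is minikube waiting for the token controller to create the default ServiceAccount before it grants kube-system the minikube-rbac cluster-admin binding (the elevateKubeSystemPrivileges step timed on the previous line). The pattern, sketched:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
        kubeconfig := "/var/lib/minikube/kubeconfig"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Exit status 0 means the ServiceAccount now exists.
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig).Run()
            if err == nil {
                fmt.Println("default ServiceAccount is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default ServiceAccount")
    }
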
	I1025 08:30:54.744711    4872 kubeadm.go:402] duration metric: took 22.475648638s to StartCluster
	I1025 08:30:54.744728    4872 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:54.744840    4872 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 08:30:54.745231    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:54.745422    4872 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 08:30:54.745561    4872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 08:30:54.745802    4872 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:30:54.745832    4872 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1025 08:30:54.745918    4872 addons.go:69] Setting yakd=true in profile "addons-468341"
	I1025 08:30:54.745938    4872 addons.go:238] Setting addon yakd=true in "addons-468341"
	I1025 08:30:54.745959    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.746513    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.746814    4872 addons.go:69] Setting inspektor-gadget=true in profile "addons-468341"
	I1025 08:30:54.746839    4872 addons.go:238] Setting addon inspektor-gadget=true in "addons-468341"
	I1025 08:30:54.746869    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.747271    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.747521    4872 addons.go:69] Setting metrics-server=true in profile "addons-468341"
	I1025 08:30:54.747545    4872 addons.go:238] Setting addon metrics-server=true in "addons-468341"
	I1025 08:30:54.747569    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.748033    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.751349    4872 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-468341"
	I1025 08:30:54.751667    4872 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-468341"
	I1025 08:30:54.751701    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.752141    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.751512    4872 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-468341"
	I1025 08:30:54.755922    4872 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-468341"
	I1025 08:30:54.755997    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.751518    4872 addons.go:69] Setting cloud-spanner=true in profile "addons-468341"
	I1025 08:30:54.751524    4872 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-468341"
	I1025 08:30:54.751528    4872 addons.go:69] Setting default-storageclass=true in profile "addons-468341"
	I1025 08:30:54.751531    4872 addons.go:69] Setting gcp-auth=true in profile "addons-468341"
	I1025 08:30:54.751534    4872 addons.go:69] Setting ingress=true in profile "addons-468341"
	I1025 08:30:54.751537    4872 addons.go:69] Setting ingress-dns=true in profile "addons-468341"
	I1025 08:30:54.751590    4872 addons.go:69] Setting registry=true in profile "addons-468341"
	I1025 08:30:54.751595    4872 addons.go:69] Setting registry-creds=true in profile "addons-468341"
	I1025 08:30:54.751599    4872 addons.go:69] Setting storage-provisioner=true in profile "addons-468341"
	I1025 08:30:54.751602    4872 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-468341"
	I1025 08:30:54.751605    4872 addons.go:69] Setting volcano=true in profile "addons-468341"
	I1025 08:30:54.751608    4872 addons.go:69] Setting volumesnapshots=true in profile "addons-468341"
	I1025 08:30:54.751641    4872 out.go:179] * Verifying Kubernetes components...
	I1025 08:30:54.759989    4872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:30:54.763662    4872 addons.go:238] Setting addon ingress-dns=true in "addons-468341"
	I1025 08:30:54.763782    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.764337    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.764702    4872 addons.go:238] Setting addon registry=true in "addons-468341"
	I1025 08:30:54.764770    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.774544    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.764919    4872 addons.go:238] Setting addon registry-creds=true in "addons-468341"
	I1025 08:30:54.784826    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.764930    4872 addons.go:238] Setting addon storage-provisioner=true in "addons-468341"
	I1025 08:30:54.787270    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.787853    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.764939    4872 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-468341"
	I1025 08:30:54.796460    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.796862    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.764953    4872 addons.go:238] Setting addon volcano=true in "addons-468341"
	I1025 08:30:54.803935    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.804533    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.764959    4872 addons.go:238] Setting addon volumesnapshots=true in "addons-468341"
	I1025 08:30:54.819884    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.820408    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.765004    4872 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-468341"
	I1025 08:30:54.849174    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.849641    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.765417    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.875695    4872 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1025 08:30:54.765426    4872 addons.go:238] Setting addon cloud-spanner=true in "addons-468341"
	I1025 08:30:54.765439    4872 mustload.go:65] Loading cluster: addons-468341
	I1025 08:30:54.765450    4872 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-468341"
	I1025 08:30:54.765459    4872 addons.go:238] Setting addon ingress=true in "addons-468341"
	I1025 08:30:54.884387    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.885033    4872 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1025 08:30:54.897134    4872 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 08:30:54.897219    4872 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 08:30:54.897315    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:54.902301    4872 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-468341"
	I1025 08:30:54.922603    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.923154    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.936393    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.937024    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.943717    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.948118    4872 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 08:30:54.970133    4872 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 08:30:54.970158    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 08:30:54.970248    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:54.948326    4872 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 08:30:54.995322    4872 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1025 08:30:54.995392    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:54.948362    4872 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 08:30:55.033447    4872 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 08:30:55.033477    4872 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 08:30:55.033557    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.051166    4872 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1025 08:30:54.966530    4872 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:30:55.061907    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.966813    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:55.061167    4872 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1025 08:30:55.061221    4872 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 08:30:55.083831    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1025 08:30:55.083902    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	W1025 08:30:55.061439    4872 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1025 08:30:55.101653    4872 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 08:30:55.101678    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 08:30:55.101758    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.132368    4872 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1025 08:30:55.136201    4872 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1025 08:30:55.137849    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 08:30:55.138217    4872 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 08:30:55.140282    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1025 08:30:55.140370    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.163271    4872 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 08:30:55.163299    4872 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 08:30:55.163367    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.189852    4872 out.go:179]   - Using image docker.io/registry:3.0.0
	I1025 08:30:55.192228    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.198183    4872 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 08:30:55.198208    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 08:30:55.198291    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.214177    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 08:30:55.214426    4872 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 08:30:55.218228    4872 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 08:30:55.222124    4872 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 08:30:55.222150    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 08:30:55.222227    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.240420    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 08:30:55.243715    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 08:30:55.246827    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 08:30:55.250657    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.258060    4872 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:30:55.258139    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 08:30:55.258155    4872 out.go:179]   - Using image docker.io/busybox:stable
	I1025 08:30:55.258229    4872 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1025 08:30:55.264356    4872 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 08:30:55.264378    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 08:30:55.264522    4872 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:30:55.264662    4872 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1025 08:30:55.264672    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 08:30:55.264735    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.264925    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 08:30:55.265084    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.295976    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 08:30:55.297416    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:55.309549    4872 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1025 08:30:55.316270    4872 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 08:30:55.316292    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 08:30:55.316373    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.352953    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.358957    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 08:30:55.363040    4872 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 08:30:55.363067    4872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 08:30:55.363147    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.381442    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.383247    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.384139    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.384643    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.415905    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.438605    4872 addons.go:238] Setting addon default-storageclass=true in "addons-468341"
	I1025 08:30:55.438647    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:55.439071    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:55.457815    4872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 08:30:55.458103    4872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 08:30:55.497621    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.498722    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.499794    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.505813    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.508387    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.522112    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.533695    4872 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 08:30:55.533715    4872 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 08:30:55.533777    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.574686    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
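
Each 'new ssh client' above is preceded by a docker container inspect whose Go template digs the host port mapped to the container's 22/tcp out of .NetworkSettings.Ports; that is how every parallel addon installer learns to dial 127.0.0.1:32768. The same lookup as a standalone sketch (the template string is copied from the log):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // sshPort returns the host port Docker mapped to 22/tcp in the container.
    func sshPort(container string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshPort("addons-468341")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("ssh -p", port, "docker@127.0.0.1")
    }
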
	I1025 08:30:55.937304    4872 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 08:30:55.937330    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 08:30:55.998374    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 08:30:56.047206    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 08:30:56.063495    4872 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 08:30:56.063526    4872 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 08:30:56.104433    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 08:30:56.111561    4872 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:56.111585    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1025 08:30:56.121701    4872 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 08:30:56.121728    4872 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 08:30:56.189433    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 08:30:56.203211    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 08:30:56.213914    4872 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 08:30:56.213938    4872 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 08:30:56.218649    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 08:30:56.246517    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 08:30:56.273796    4872 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 08:30:56.273821    4872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 08:30:56.305896    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 08:30:56.352669    4872 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 08:30:56.352774    4872 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 08:30:56.412107    4872 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 08:30:56.412196    4872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 08:30:56.415273    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:56.431546    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 08:30:56.499455    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 08:30:56.529552    4872 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 08:30:56.529629    4872 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 08:30:56.549398    4872 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 08:30:56.549502    4872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 08:30:56.552126    4872 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 08:30:56.552197    4872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 08:30:56.654178    4872 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 08:30:56.654277    4872 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 08:30:56.735595    4872 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 08:30:56.735679    4872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 08:30:56.753089    4872 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 08:30:56.753174    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 08:30:56.789295    4872 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 08:30:56.789374    4872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 08:30:56.888624    4872 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 08:30:56.888712    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 08:30:56.912954    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 08:30:57.008826    4872 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 08:30:57.008908    4872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 08:30:57.102104    4872 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 08:30:57.102207    4872 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 08:30:57.139818    4872 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 08:30:57.139846    4872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 08:30:57.147506    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 08:30:57.244513    4872 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:30:57.244599    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 08:30:57.259419    4872 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 08:30:57.259443    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 08:30:57.402869    4872 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.944712903s)
	I1025 08:30:57.402961    4872 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.945072461s)
	I1025 08:30:57.402983    4872 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
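The /bin/bash pipeline that just completed rewrites the coredns ConfigMap: it inserts a hosts block (mapping host.minikube.internal to the gateway IP) before the forward directive, plus a log directive before errors. A sketch of the hosts-block half of that transformation, run against a made-up sample Corefile:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a CoreDNS "hosts" block immediately before
    // the "forward . /etc/resolv.conf" directive, mirroring what the sed
    // pipeline above does to the coredns ConfigMap.
    func injectHostRecord(corefile, ip string) string {
    	hosts := fmt.Sprintf(
    		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
    		ip)
    	var b strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			b.WriteString(hosts)
    		}
    		b.WriteString(line)
    	}
    	return b.String()
    }

    func main() {
    	sample := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
    	fmt.Print(injectHostRecord(sample, "192.168.49.1"))
    }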
	I1025 08:30:57.404548    4872 node_ready.go:35] waiting up to 6m0s for node "addons-468341" to be "Ready" ...
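node_ready.go then polls the node object until its Ready condition turns True, under the six-minute ceiling named above. A sketch of that wait with client-go (the kubeconfig path and poll interval are assumptions):

    package main

    import (
    	"context"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "addons-468341", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Println(`node "addons-468341" is Ready`)
    }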
	I1025 08:30:57.550738    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:30:57.556191    4872 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 08:30:57.556216    4872 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 08:30:57.622560    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.624149965s)
	I1025 08:30:57.622630    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.57540325s)
	I1025 08:30:57.816126    4872 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 08:30:57.816149    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 08:30:57.906960    4872 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-468341" context rescaled to 1 replicas
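The kapi.go:214 rescale trims coredns to one replica for this single-node cluster. Sketched here by shelling out to kubectl scale, which is an assumption for brevity (minikube does this through client-go):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command(
    		"kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
    		"-n", "kube-system",
    		"scale", "deployment", "coredns", "--replicas=1",
    	).CombinedOutput()
    	log.Printf("%s", out)
    	if err != nil {
    		log.Fatal(err)
    	}
    }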
	I1025 08:30:58.005904    4872 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 08:30:58.005928    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 08:30:58.131476    4872 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 08:30:58.131506    4872 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 08:30:58.244951    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1025 08:30:59.473610    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:30:59.800859    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.69639027s)
	I1025 08:30:59.800928    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.611470994s)
	I1025 08:31:00.475994    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.272750943s)
	I1025 08:31:01.324086    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.105404176s)
	I1025 08:31:01.324615    4872 addons.go:479] Verifying addon ingress=true in "addons-468341"
	I1025 08:31:01.324212    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.07767063s)
	I1025 08:31:01.324239    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.018263649s)
	I1025 08:31:01.324298    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.909004284s)
	W1025 08:31:01.324864    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:01.324896    4872 retry.go:31] will retry after 323.692618ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
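kubectl's complaint is precise: every YAML document it applies must declare apiVersion and kind, and something in ig-crd.yaml is arriving without them, so validation fails even though all the ig-deployment.yaml objects were created. A standalone check that reproduces the same diagnosis, assuming gopkg.in/yaml.v3:

    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	if len(os.Args) < 2 {
    		log.Fatal("usage: check <manifest.yaml>")
    	}
    	f, err := os.Open(os.Args[1])
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	// Manifests may hold several documents separated by "---";
    	// each one independently needs apiVersion and kind.
    	dec := yaml.NewDecoder(f)
    	for i := 1; ; i++ {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatalf("doc %d: %v", i, err)
    		}
    		if doc == nil {
    			continue // empty document between separators
    		}
    		if doc["apiVersion"] == nil {
    			fmt.Printf("doc %d: apiVersion not set\n", i)
    		}
    		if doc["kind"] == nil {
    			fmt.Printf("doc %d: kind not set\n", i)
    		}
    	}
    }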
	I1025 08:31:01.324312    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.892747197s)
	I1025 08:31:01.324360    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.824873829s)
	I1025 08:31:01.324977    4872 addons.go:479] Verifying addon metrics-server=true in "addons-468341"
	I1025 08:31:01.324380    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.411332362s)
	I1025 08:31:01.324988    4872 addons.go:479] Verifying addon registry=true in "addons-468341"
	I1025 08:31:01.324409    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.176884827s)
	I1025 08:31:01.324479    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.773715492s)
	W1025 08:31:01.325810    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 08:31:01.325827    4872 retry.go:31] will retry after 181.32551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
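This failure is pure ordering: a VolumeSnapshotClass cannot be mapped until the volumesnapshotclasses CRD, created by the same apply, is registered and established; the retry below succeeds once the CRDs land. One way to make the ordering explicit is kubectl wait, sketched here (shelling out is an assumption; minikube relies on its retry loop instead):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	crds := []string{
    		"volumesnapshotclasses.snapshot.storage.k8s.io",
    		"volumesnapshotcontents.snapshot.storage.k8s.io",
    		"volumesnapshots.snapshot.storage.k8s.io",
    	}
    	for _, crd := range crds {
    		// Blocks until the API server reports the CRD Established,
    		// after which kinds like VolumeSnapshotClass can be mapped.
    		out, err := exec.Command(
    			"kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
    			"wait", "--for=condition=established", "--timeout=60s",
    			"crd/"+crd,
    		).CombinedOutput()
    		log.Printf("%s", out)
    		if err != nil {
    			log.Fatal(err)
    		}
    	}
    }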
	I1025 08:31:01.329079    4872 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-468341 service yakd-dashboard -n yakd-dashboard
	
	I1025 08:31:01.329229    4872 out.go:179] * Verifying registry addon...
	I1025 08:31:01.329273    4872 out.go:179] * Verifying ingress addon...
	I1025 08:31:01.333747    4872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 08:31:01.335338    4872 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 08:31:01.342365    4872 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 08:31:01.342384    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1025 08:31:01.348735    4872 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
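The "object has been modified" error above is an optimistic-concurrency conflict: another writer updated the local-path StorageClass between minikube's read and its write, so the stale resourceVersion was rejected. The standard cure is client-go's retry.RetryOnConflict, which re-reads the object before every attempt. A sketch, with the kubeconfig path and annotation value as assumptions:

    package main

    import (
    	"context"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx := context.Background()

    	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		// Re-read on every attempt so the update carries a fresh
    		// resourceVersion instead of the stale one that got rejected.
    		sc, err := cs.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
    		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
    		return err // a 409 Conflict triggers another attempt
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }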
	I1025 08:31:01.349094    4872 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 08:31:01.349132    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:01.507862    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:31:01.605791    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.360760737s)
	I1025 08:31:01.605867    4872 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-468341"
	I1025 08:31:01.609218    4872 out.go:179] * Verifying csi-hostpath-driver addon...
	I1025 08:31:01.613541    4872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 08:31:01.623840    4872 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 08:31:01.623865    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
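The kapi.go:96 lines that dominate the rest of this log are a poll loop: list pods by label selector, report their phase, and repeat until all are Running. A rough client-go equivalent (namespace, selector, and interval taken from the log; the real helper distinguishes more states than Pending and Running):

    package main

    import (
    	"context"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx := context.Background()
    	selector := "kubernetes.io/minikube-addons=csi-hostpath-driver"

    	for {
    		pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
    			metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			log.Fatal(err)
    		}
    		allRunning := len(pods.Items) > 0
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				allRunning = false
    				log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
    			}
    		}
    		if allRunning {
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }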
	I1025 08:31:01.649141    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:01.839462    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:01.839689    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:31:01.912925    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:02.117756    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:02.339673    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:02.339919    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:02.617111    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:02.838493    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:02.838653    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:02.908100    4872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 08:31:02.908192    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:31:02.927519    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
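sshutil.go builds its client from the pieces logged above: the host port that docker container inspect extracted for 22/tcp, plus the per-machine id_rsa key. A sketch with golang.org/x/crypto/ssh that also shows one way the "scp memory -->" transfers could land an in-memory buffer at a root-owned path (the tee mechanism here is an assumption for illustration, not minikube's exact transfer code):

    package main

    import (
    	"bytes"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32768", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// Feed an in-memory payload to a root-owned path via sudo tee.
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader([]byte("{\"project_id\": \"demo\"}\n"))
    	if err := sess.Run("sudo tee /var/lib/minikube/google_application_credentials.json >/dev/null"); err != nil {
    		log.Fatal(err)
    	}
    }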
	I1025 08:31:03.042942    4872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 08:31:03.055827    4872 addons.go:238] Setting addon gcp-auth=true in "addons-468341"
	I1025 08:31:03.055871    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:31:03.056337    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:31:03.074101    4872 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 08:31:03.074155    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:31:03.094787    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:31:03.118667    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:03.338052    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:03.338482    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:03.617346    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:03.837724    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:03.839386    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:04.117864    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:04.280816    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.772906338s)
	I1025 08:31:04.280943    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.631776634s)
	W1025 08:31:04.280966    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:04.280982    4872 retry.go:31] will retry after 371.165011ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:04.281016    4872 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.206893205s)
	I1025 08:31:04.284217    4872 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 08:31:04.287138    4872 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:31:04.289973    4872 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 08:31:04.290010    4872 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 08:31:04.302876    4872 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 08:31:04.302897    4872 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 08:31:04.315804    4872 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 08:31:04.315827    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 08:31:04.329120    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 08:31:04.338434    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:04.338845    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1025 08:31:04.410366    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:04.617429    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:04.652788    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:04.845518    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:04.851962    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:04.878589    4872 addons.go:479] Verifying addon gcp-auth=true in "addons-468341"
	I1025 08:31:04.881611    4872 out.go:179] * Verifying gcp-auth addon...
	I1025 08:31:04.885227    4872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 08:31:04.946450    4872 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 08:31:04.946489    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:05.117321    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:05.340342    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:05.341320    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:05.388795    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:05.607419    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:05.607451    4872 retry.go:31] will retry after 827.139043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:05.617394    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:05.837742    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:05.838553    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:05.888468    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:06.117113    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:06.337262    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:06.337913    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:06.388837    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:06.435360    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:06.617298    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:06.837670    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:06.839192    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:06.888497    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:06.907915    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:07.116798    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:31:07.242309    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:07.242337    4872 retry.go:31] will retry after 1.164224313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:07.338297    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:07.338580    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:07.388601    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:07.617299    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:07.837442    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:07.838555    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:07.888764    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:08.116552    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:08.337296    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:08.338228    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:08.388624    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:08.407161    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:08.630411    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:08.837609    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:08.840400    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:08.888658    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:09.116894    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:31:09.209359    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:09.209435    4872 retry.go:31] will retry after 1.876878779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
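The retry.go:31 delays grow from roughly 0.3s toward 8.4s with some irregularity, which is characteristic of exponential backoff with jitter. A generic sketch of that pattern; the base delay, doubling factor, and jitter range are assumptions rather than minikube's actual tuning:

    package main

    import (
    	"errors"
    	"log"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries op with a delay that doubles each attempt
    // and carries random jitter, so concurrent retriers do not stampede.
    func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
    	delay := base
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		jittered := delay + time.Duration(rand.Int63n(int64(delay))) // 1x to 2x delay
    		log.Printf("will retry after %v: %v", jittered, err)
    		time.Sleep(jittered)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retryWithBackoff(6, 300*time.Millisecond, func() error {
    		calls++
    		if calls < 4 {
    			return errors.New("apply failed")
    		}
    		return nil
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("succeeded after %d attempts", calls)
    }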
	I1025 08:31:09.337470    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:09.338304    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:09.388919    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:09.407578    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:09.616735    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:09.837613    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:09.838724    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:09.888744    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:10.117323    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:10.338521    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:10.339040    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:10.388981    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:10.616726    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:10.837021    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:10.839069    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:10.888804    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:11.086827    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:11.117288    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:11.337598    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:11.340136    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:11.389163    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:11.408543    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:11.616736    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:11.837577    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:11.838148    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:11.888135    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:11.897885    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:11.897916    4872 retry.go:31] will retry after 2.028497252s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:12.116658    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:12.336509    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:12.338824    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:12.388844    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:12.617272    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:12.837272    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:12.838491    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:12.888384    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:13.117282    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:13.337290    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:13.338175    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:13.388817    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:13.616426    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:13.837727    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:13.839301    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:13.888203    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:13.907980    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:13.927276    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:14.116217    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:14.338967    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:14.340070    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:14.389452    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:14.617676    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:31:14.739971    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:14.740012    4872 retry.go:31] will retry after 2.23204681s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:14.837018    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:14.839192    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:14.887849    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:15.117571    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:15.337241    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:15.339103    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:15.388596    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:15.617160    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:15.837222    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:15.838570    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:15.888445    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:15.908329    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:16.117322    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:16.337889    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:16.338856    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:16.388780    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:16.617326    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:16.837199    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:16.838395    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:16.888094    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:16.972776    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:17.117231    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:17.338883    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:17.339856    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:17.388979    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:17.618768    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:31:17.807683    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:17.807776    4872 retry.go:31] will retry after 2.568380534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:17.836622    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:17.839103    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:17.888248    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:18.117303    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:18.337076    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:18.338485    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:18.388713    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:18.407231    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:18.617373    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:18.838910    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:18.839324    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:18.887938    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:19.116905    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:19.336878    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:19.339110    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:19.389019    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:19.617378    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:19.837631    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:19.838827    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:19.888545    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:20.117670    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:20.338754    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:20.339087    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:20.377155    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:20.388311    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:20.408220    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:20.618358    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:20.840319    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:20.841410    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:20.888523    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:21.117222    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:31:21.202864    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:21.202902    4872 retry.go:31] will retry after 8.402756938s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
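The `kapi.go:96` lines that dominate this log are a simple poll: list the pods matching a label selector and report their phase until one reaches Running. A sketch of that loop with client-go, assuming the kubeconfig path seen in the log; the function name and polling interval are illustrative, not minikube's actual kapi code:

```go
// Poll pods by label selector until one reports phase Running,
// mirroring the "waiting for pod ... current state: Pending" lines.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPod(cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return nil
			}
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen above
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = waitForPod(cs, "kube-system", "kubernetes.io/minikube-addons=registry")
}
```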
	I1025 08:31:21.336988    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:21.339196    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:21.388201    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:21.617220    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:21.837477    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:21.839023    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:21.888800    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:22.117004    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:22.336891    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:22.338978    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:22.388712    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:22.616529    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:22.837363    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:22.838786    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:22.888370    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:22.908364    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:23.117142    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:23.337705    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:23.338517    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:23.388247    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:23.616287    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:23.837413    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:23.838865    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:23.889058    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:24.116987    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:24.337108    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:24.338036    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:24.388673    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:24.616744    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:24.836942    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:24.839286    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:24.888163    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:25.117329    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:25.337286    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:25.339004    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:25.388689    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:25.407272    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:25.617136    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:25.837135    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:25.838860    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:25.888795    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:26.117522    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:26.337158    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:26.338980    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:26.388817    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:26.617243    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:26.836853    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:26.838175    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:26.887892    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:27.116865    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:27.338186    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:27.339436    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:27.388097    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:27.408037    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:27.617022    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:27.838555    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:27.838719    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:27.888434    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:28.117609    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:28.336839    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:28.339390    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:28.388848    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:28.616812    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:28.837008    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:28.839343    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:28.888530    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:29.117367    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:29.337041    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:29.338458    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:29.388171    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:29.408205    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:29.606490    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:29.617110    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:29.839315    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:29.839795    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:29.888918    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:30.117357    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:30.339164    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:30.339615    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:30.389037    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:30.489174    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:30.489207    4872 retry.go:31] will retry after 8.946405924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
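Note the growing delays between attempts (2.57s, 8.40s, 8.95s): `retry.go` is backing off between applies rather than hammering the apiserver. A minimal sketch of retry-with-growing-backoff-and-jitter under that assumption; the cap and jitter values are illustrative, not minikube's actual retry implementation:

```go
// Retry an operation with exponentially growing, jittered delays,
// printing a "will retry after" line like the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Double the delay each attempt and add jitter so retries spread out.
		delay := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = retryWithBackoff(4, 2*time.Second, func() error {
		return errors.New("apply failed") // stand-in for the kubectl apply
	})
}
```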
	I1025 08:31:30.617476    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:30.838109    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:30.838759    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:30.888764    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:31.116619    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:31.338537    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:31.338037    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:31.389791    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:31.408601    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:31.616751    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:31.836677    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:31.839114    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:31.888849    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:32.116376    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:32.337582    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:32.338331    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:32.389100    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:32.616866    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:32.836553    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:32.838553    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:32.888412    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:33.116795    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:33.337724    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:33.338969    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:33.388832    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:33.616626    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:33.837443    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:33.839858    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:33.888461    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:33.908371    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:34.117896    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:34.337350    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:34.339769    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:34.388636    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:34.616791    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:34.836715    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:34.838951    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:34.888774    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:35.116816    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:35.337931    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:35.338497    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:35.388583    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:35.617614    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:35.837856    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:35.838688    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:35.888243    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:36.117298    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:36.341828    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:36.342306    4872 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 08:31:36.342371    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:36.392928    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:36.418408    4872 node_ready.go:49] node "addons-468341" is "Ready"
	I1025 08:31:36.418484    4872 node_ready.go:38] duration metric: took 39.013899863s for node "addons-468341" to be "Ready" ...
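The node-readiness wait that just completed (39s) is a check of the node's `Ready` condition. A sketch of reading that condition with client-go, assuming the node name and kubeconfig path from this log:

```go
// Read the Ready condition off the node object, the check behind the
// node_ready.go lines above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-468341", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %q Ready=%s\n", node.Name, c.Status)
		}
	}
}
```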
	I1025 08:31:36.418521    4872 api_server.go:52] waiting for apiserver process to appear ...
	I1025 08:31:36.418607    4872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 08:31:36.437095    4872 api_server.go:72] duration metric: took 41.69164123s to wait for apiserver process to appear ...
	I1025 08:31:36.437120    4872 api_server.go:88] waiting for apiserver healthz status ...
	I1025 08:31:36.437175    4872 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 08:31:36.448230    4872 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 08:31:36.449282    4872 api_server.go:141] control plane version: v1.34.1
	I1025 08:31:36.449322    4872 api_server.go:131] duration metric: took 12.194486ms to wait for apiserver health ...
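The healthz wait above is a plain HTTPS GET against the apiserver: a 200 with body `ok` means healthy. A sketch of that probe, with `InsecureSkipVerify` only because the test cluster serves a self-signed certificate:

```go
// Probe the apiserver /healthz endpoint the same way the log does.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // local test cluster only
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```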
	I1025 08:31:36.449331    4872 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 08:31:36.459279    4872 system_pods.go:59] 19 kube-system pods found
	I1025 08:31:36.459323    4872 system_pods.go:61] "coredns-66bc5c9577-dh6v4" [a83a218d-bfcd-4174-955e-eeb9264cb12f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:31:36.459366    4872 system_pods.go:61] "csi-hostpath-attacher-0" [adca97a9-465e-4053-8da2-1647455bd10d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:31:36.459382    4872 system_pods.go:61] "csi-hostpath-resizer-0" [7efafc69-47a7-4d91-9d6e-660223b9207b] Pending
	I1025 08:31:36.459388    4872 system_pods.go:61] "csi-hostpathplugin-wm2b7" [829d1b6b-726d-4eb4-b18e-0e6b86c1755d] Pending
	I1025 08:31:36.459398    4872 system_pods.go:61] "etcd-addons-468341" [9f5fd6e8-de6d-49b1-b102-383a1814fab7] Running
	I1025 08:31:36.459403    4872 system_pods.go:61] "kindnet-rb4dc" [ff4c343e-ba3a-4ceb-adc7-a42c595072c7] Running
	I1025 08:31:36.459408    4872 system_pods.go:61] "kube-apiserver-addons-468341" [29bef051-6bb7-49fc-8e31-25ffc0ace270] Running
	I1025 08:31:36.459429    4872 system_pods.go:61] "kube-controller-manager-addons-468341" [b34bd6e5-2d40-4ad2-a111-ee861c618f57] Running
	I1025 08:31:36.459441    4872 system_pods.go:61] "kube-ingress-dns-minikube" [fef98f06-32c6-44e6-8a25-dce9feb2bc80] Pending
	I1025 08:31:36.459446    4872 system_pods.go:61] "kube-proxy-58zqr" [3d51ef2f-f60c-41f7-a794-69cb67431709] Running
	I1025 08:31:36.459451    4872 system_pods.go:61] "kube-scheduler-addons-468341" [890a9dc4-f6dc-4545-a4bb-15356976c393] Running
	I1025 08:31:36.459459    4872 system_pods.go:61] "metrics-server-85b7d694d7-rqmn4" [1709dfd1-357e-496a-98b7-205be9cae357] Pending
	I1025 08:31:36.459465    4872 system_pods.go:61] "nvidia-device-plugin-daemonset-w5ht9" [05248aa9-d292-4130-b10d-c632220baebb] Pending
	I1025 08:31:36.459476    4872 system_pods.go:61] "registry-6b586f9694-bl9lz" [1d570e3f-1a7f-47f7-9a56-92f7a27efe03] Pending
	I1025 08:31:36.459483    4872 system_pods.go:61] "registry-creds-764b6fb674-q5vpt" [0ac9c422-007f-4643-8aaa-fa94a38fc826] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:31:36.459493    4872 system_pods.go:61] "registry-proxy-xjrqf" [8a4de784-ff9c-48be-a85b-956955a98f06] Pending
	I1025 08:31:36.459518    4872 system_pods.go:61] "snapshot-controller-7d9fbc56b8-brfpz" [b6abd38b-f9b9-445e-b732-967716b4219d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:31:36.459528    4872 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kwsbg" [9f0cc055-bbc4-44b1-b9dd-41670fa5d058] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:31:36.459536    4872 system_pods.go:61] "storage-provisioner" [18b717a7-f9d4-4696-9839-6564fcdc4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:31:36.459546    4872 system_pods.go:74] duration metric: took 10.208732ms to wait for pod list to return data ...
	I1025 08:31:36.459561    4872 default_sa.go:34] waiting for default service account to be created ...
	I1025 08:31:36.479484    4872 default_sa.go:45] found service account: "default"
	I1025 08:31:36.479515    4872 default_sa.go:55] duration metric: took 19.946627ms for default service account to be created ...
	I1025 08:31:36.479525    4872 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 08:31:36.565777    4872 system_pods.go:86] 19 kube-system pods found
	I1025 08:31:36.565813    4872 system_pods.go:89] "coredns-66bc5c9577-dh6v4" [a83a218d-bfcd-4174-955e-eeb9264cb12f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:31:36.565822    4872 system_pods.go:89] "csi-hostpath-attacher-0" [adca97a9-465e-4053-8da2-1647455bd10d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:31:36.565884    4872 system_pods.go:89] "csi-hostpath-resizer-0" [7efafc69-47a7-4d91-9d6e-660223b9207b] Pending
	I1025 08:31:36.565889    4872 system_pods.go:89] "csi-hostpathplugin-wm2b7" [829d1b6b-726d-4eb4-b18e-0e6b86c1755d] Pending
	I1025 08:31:36.565893    4872 system_pods.go:89] "etcd-addons-468341" [9f5fd6e8-de6d-49b1-b102-383a1814fab7] Running
	I1025 08:31:36.565906    4872 system_pods.go:89] "kindnet-rb4dc" [ff4c343e-ba3a-4ceb-adc7-a42c595072c7] Running
	I1025 08:31:36.565911    4872 system_pods.go:89] "kube-apiserver-addons-468341" [29bef051-6bb7-49fc-8e31-25ffc0ace270] Running
	I1025 08:31:36.565928    4872 system_pods.go:89] "kube-controller-manager-addons-468341" [b34bd6e5-2d40-4ad2-a111-ee861c618f57] Running
	I1025 08:31:36.565942    4872 system_pods.go:89] "kube-ingress-dns-minikube" [fef98f06-32c6-44e6-8a25-dce9feb2bc80] Pending
	I1025 08:31:36.565946    4872 system_pods.go:89] "kube-proxy-58zqr" [3d51ef2f-f60c-41f7-a794-69cb67431709] Running
	I1025 08:31:36.565971    4872 system_pods.go:89] "kube-scheduler-addons-468341" [890a9dc4-f6dc-4545-a4bb-15356976c393] Running
	I1025 08:31:36.566000    4872 system_pods.go:89] "metrics-server-85b7d694d7-rqmn4" [1709dfd1-357e-496a-98b7-205be9cae357] Pending
	I1025 08:31:36.566006    4872 system_pods.go:89] "nvidia-device-plugin-daemonset-w5ht9" [05248aa9-d292-4130-b10d-c632220baebb] Pending
	I1025 08:31:36.566009    4872 system_pods.go:89] "registry-6b586f9694-bl9lz" [1d570e3f-1a7f-47f7-9a56-92f7a27efe03] Pending
	I1025 08:31:36.566023    4872 system_pods.go:89] "registry-creds-764b6fb674-q5vpt" [0ac9c422-007f-4643-8aaa-fa94a38fc826] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:31:36.566028    4872 system_pods.go:89] "registry-proxy-xjrqf" [8a4de784-ff9c-48be-a85b-956955a98f06] Pending
	I1025 08:31:36.566043    4872 system_pods.go:89] "snapshot-controller-7d9fbc56b8-brfpz" [b6abd38b-f9b9-445e-b732-967716b4219d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:31:36.566050    4872 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kwsbg" [9f0cc055-bbc4-44b1-b9dd-41670fa5d058] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:31:36.566057    4872 system_pods.go:89] "storage-provisioner" [18b717a7-f9d4-4696-9839-6564fcdc4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:31:36.566082    4872 retry.go:31] will retry after 271.33589ms: missing components: kube-dns
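The `missing components: kube-dns` retry means no CoreDNS pod has reached phase Running yet. A sketch of that gate, assuming the standard `k8s-app=kube-dns` label CoreDNS pods carry; the component-to-selector mapping is illustrative:

```go
// Gate on a required kube-system component having a Running pod.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	running := false
	for _, p := range pods.Items {
		running = running || p.Status.Phase == corev1.PodRunning
	}
	if !running {
		fmt.Println("missing components: kube-dns")
	}
}
```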
	I1025 08:31:36.694257    4872 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 08:31:36.694283    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:36.847515    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:36.849293    4872 system_pods.go:86] 19 kube-system pods found
	I1025 08:31:36.849325    4872 system_pods.go:89] "coredns-66bc5c9577-dh6v4" [a83a218d-bfcd-4174-955e-eeb9264cb12f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:31:36.849334    4872 system_pods.go:89] "csi-hostpath-attacher-0" [adca97a9-465e-4053-8da2-1647455bd10d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:31:36.849364    4872 system_pods.go:89] "csi-hostpath-resizer-0" [7efafc69-47a7-4d91-9d6e-660223b9207b] Pending
	I1025 08:31:36.849378    4872 system_pods.go:89] "csi-hostpathplugin-wm2b7" [829d1b6b-726d-4eb4-b18e-0e6b86c1755d] Pending
	I1025 08:31:36.849382    4872 system_pods.go:89] "etcd-addons-468341" [9f5fd6e8-de6d-49b1-b102-383a1814fab7] Running
	I1025 08:31:36.849386    4872 system_pods.go:89] "kindnet-rb4dc" [ff4c343e-ba3a-4ceb-adc7-a42c595072c7] Running
	I1025 08:31:36.849398    4872 system_pods.go:89] "kube-apiserver-addons-468341" [29bef051-6bb7-49fc-8e31-25ffc0ace270] Running
	I1025 08:31:36.849403    4872 system_pods.go:89] "kube-controller-manager-addons-468341" [b34bd6e5-2d40-4ad2-a111-ee861c618f57] Running
	I1025 08:31:36.849408    4872 system_pods.go:89] "kube-ingress-dns-minikube" [fef98f06-32c6-44e6-8a25-dce9feb2bc80] Pending
	I1025 08:31:36.849417    4872 system_pods.go:89] "kube-proxy-58zqr" [3d51ef2f-f60c-41f7-a794-69cb67431709] Running
	I1025 08:31:36.849421    4872 system_pods.go:89] "kube-scheduler-addons-468341" [890a9dc4-f6dc-4545-a4bb-15356976c393] Running
	I1025 08:31:36.849440    4872 system_pods.go:89] "metrics-server-85b7d694d7-rqmn4" [1709dfd1-357e-496a-98b7-205be9cae357] Pending
	I1025 08:31:36.849447    4872 system_pods.go:89] "nvidia-device-plugin-daemonset-w5ht9" [05248aa9-d292-4130-b10d-c632220baebb] Pending
	I1025 08:31:36.849451    4872 system_pods.go:89] "registry-6b586f9694-bl9lz" [1d570e3f-1a7f-47f7-9a56-92f7a27efe03] Pending
	I1025 08:31:36.849457    4872 system_pods.go:89] "registry-creds-764b6fb674-q5vpt" [0ac9c422-007f-4643-8aaa-fa94a38fc826] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:31:36.849461    4872 system_pods.go:89] "registry-proxy-xjrqf" [8a4de784-ff9c-48be-a85b-956955a98f06] Pending
	I1025 08:31:36.849480    4872 system_pods.go:89] "snapshot-controller-7d9fbc56b8-brfpz" [b6abd38b-f9b9-445e-b732-967716b4219d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:31:36.849493    4872 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kwsbg" [9f0cc055-bbc4-44b1-b9dd-41670fa5d058] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:31:36.849500    4872 system_pods.go:89] "storage-provisioner" [18b717a7-f9d4-4696-9839-6564fcdc4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:31:36.849519    4872 retry.go:31] will retry after 377.583974ms: missing components: kube-dns
	I1025 08:31:36.849610    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:36.893747    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:37.125374    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:37.252921    4872 system_pods.go:86] 19 kube-system pods found
	I1025 08:31:37.252957    4872 system_pods.go:89] "coredns-66bc5c9577-dh6v4" [a83a218d-bfcd-4174-955e-eeb9264cb12f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:31:37.252995    4872 system_pods.go:89] "csi-hostpath-attacher-0" [adca97a9-465e-4053-8da2-1647455bd10d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:31:37.253011    4872 system_pods.go:89] "csi-hostpath-resizer-0" [7efafc69-47a7-4d91-9d6e-660223b9207b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 08:31:37.253019    4872 system_pods.go:89] "csi-hostpathplugin-wm2b7" [829d1b6b-726d-4eb4-b18e-0e6b86c1755d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 08:31:37.253028    4872 system_pods.go:89] "etcd-addons-468341" [9f5fd6e8-de6d-49b1-b102-383a1814fab7] Running
	I1025 08:31:37.253034    4872 system_pods.go:89] "kindnet-rb4dc" [ff4c343e-ba3a-4ceb-adc7-a42c595072c7] Running
	I1025 08:31:37.253039    4872 system_pods.go:89] "kube-apiserver-addons-468341" [29bef051-6bb7-49fc-8e31-25ffc0ace270] Running
	I1025 08:31:37.253044    4872 system_pods.go:89] "kube-controller-manager-addons-468341" [b34bd6e5-2d40-4ad2-a111-ee861c618f57] Running
	I1025 08:31:37.253066    4872 system_pods.go:89] "kube-ingress-dns-minikube" [fef98f06-32c6-44e6-8a25-dce9feb2bc80] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 08:31:37.253077    4872 system_pods.go:89] "kube-proxy-58zqr" [3d51ef2f-f60c-41f7-a794-69cb67431709] Running
	I1025 08:31:37.253082    4872 system_pods.go:89] "kube-scheduler-addons-468341" [890a9dc4-f6dc-4545-a4bb-15356976c393] Running
	I1025 08:31:37.253088    4872 system_pods.go:89] "metrics-server-85b7d694d7-rqmn4" [1709dfd1-357e-496a-98b7-205be9cae357] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 08:31:37.253098    4872 system_pods.go:89] "nvidia-device-plugin-daemonset-w5ht9" [05248aa9-d292-4130-b10d-c632220baebb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 08:31:37.253119    4872 system_pods.go:89] "registry-6b586f9694-bl9lz" [1d570e3f-1a7f-47f7-9a56-92f7a27efe03] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 08:31:37.253126    4872 system_pods.go:89] "registry-creds-764b6fb674-q5vpt" [0ac9c422-007f-4643-8aaa-fa94a38fc826] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:31:37.253159    4872 system_pods.go:89] "registry-proxy-xjrqf" [8a4de784-ff9c-48be-a85b-956955a98f06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 08:31:37.253172    4872 system_pods.go:89] "snapshot-controller-7d9fbc56b8-brfpz" [b6abd38b-f9b9-445e-b732-967716b4219d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:31:37.253180    4872 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kwsbg" [9f0cc055-bbc4-44b1-b9dd-41670fa5d058] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:31:37.253195    4872 system_pods.go:89] "storage-provisioner" [18b717a7-f9d4-4696-9839-6564fcdc4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:31:37.253207    4872 system_pods.go:126] duration metric: took 773.676098ms to wait for k8s-apps to be running ...
	I1025 08:31:37.253216    4872 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 08:31:37.253285    4872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 08:31:37.278350    4872 system_svc.go:56] duration metric: took 25.123686ms WaitForService to wait for kubelet
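The kubelet service check above is just `systemctl is-active --quiet`, whose exit status answers the question (0 only when the unit is active). A sketch of shelling out the same way; note the log's exact invocation also carries the extra word `service`:

```go
// Ask systemd whether kubelet is active; a nil error means exit status 0.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
```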
	I1025 08:31:37.278376    4872 kubeadm.go:586] duration metric: took 42.532926727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 08:31:37.278396    4872 node_conditions.go:102] verifying NodePressure condition ...
	I1025 08:31:37.300522    4872 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 08:31:37.300562    4872 node_conditions.go:123] node cpu capacity is 2
	I1025 08:31:37.300574    4872 node_conditions.go:105] duration metric: took 22.172858ms to run NodePressure ...
	I1025 08:31:37.300606    4872 start.go:241] waiting for startup goroutines ...
	I1025 08:31:37.353539    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:37.354058    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:37.458109    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:37.617503    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:37.841918    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:37.849776    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:37.891968    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:38.118759    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:38.341832    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:38.344393    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:38.388763    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:38.617531    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:38.837200    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:38.839445    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:38.888853    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:39.117305    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:39.338758    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:39.339060    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:39.389108    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:39.436293    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:39.617033    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:39.837308    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:39.843162    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:39.888013    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:40.117358    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:40.337138    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:40.339681    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:40.388139    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:40.586237    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.149907989s)
	W1025 08:31:40.586283    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:40.586311    4872 retry.go:31] will retry after 17.295065096s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
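As the stderr itself suggests, rerunning the apply with `--validate=false` would let it through by skipping client-side validation; that is useful as a diagnostic, though it masks the malformed document rather than fixing it. A sketch of that rerun (paths taken from the log):

```go
// Re-run the failing apply with client-side validation disabled,
// the escape hatch the error message itself recommends.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/ig-crd.yaml").CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}
```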
	I1025 08:31:40.618857    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:40.836940    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:40.839469    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:40.889093    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:41.117886    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:41.337107    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:41.339245    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:41.388337    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:41.617691    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:41.837797    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:41.838736    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:41.888972    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:42.118061    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:42.339318    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:42.349443    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:42.389258    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:42.617972    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:42.837114    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:42.838227    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:42.888386    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:43.116696    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:43.337514    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:43.339696    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:43.389140    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:43.618679    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:43.838480    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:43.840464    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:43.888682    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:44.117917    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:44.339979    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:44.340359    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:44.388589    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:44.617017    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:44.837138    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:44.839545    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:44.888443    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:45.118553    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:45.354121    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:45.354329    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:45.390929    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:45.617363    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:45.837884    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:45.839344    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:45.889880    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:46.117019    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:46.337411    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:46.340137    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:46.388905    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:46.616931    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:46.836651    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:46.839391    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:46.889354    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:47.117559    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:47.336506    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:47.338986    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:47.388845    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:47.617493    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:47.837474    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:47.839247    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:47.889659    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:48.117177    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:48.337182    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:48.339932    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:48.389754    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:48.617537    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:48.840010    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:48.841194    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:48.889423    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:49.118336    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:49.337788    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:49.338625    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:49.389481    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:49.617738    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:49.837201    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:49.840126    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:49.893546    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:50.117885    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:50.338240    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:50.339775    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:50.389090    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:50.616858    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:50.839330    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:50.839636    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:50.937610    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:51.117959    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:51.338829    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:51.339796    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:51.389141    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:51.618089    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:51.837611    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:51.840557    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:51.898464    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:52.118103    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:52.337608    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:52.339338    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:52.388633    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:52.617455    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:52.839389    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:52.839826    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:52.892607    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:53.117140    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:53.337610    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:53.339711    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:53.388736    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:53.617567    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:53.853248    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:53.853661    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:53.893349    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:54.117521    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:54.338959    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:54.339169    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:54.388036    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:54.617284    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:54.837831    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:54.839238    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:54.891217    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:55.118774    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:55.338296    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:55.338924    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:55.389598    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:55.618147    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:55.837468    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:55.838163    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:55.888228    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:56.118065    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:56.342668    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:56.343210    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:56.440310    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:56.618292    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:56.839063    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:56.840180    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:56.888382    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:57.117479    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:57.339704    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:57.340358    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:57.388508    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:57.621086    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:57.840141    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:57.840617    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:57.881917    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:57.888315    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:58.117263    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:58.339089    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:58.339520    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:58.439810    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:58.619334    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:58.839832    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:58.840308    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:58.888292    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:59.118873    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:59.183919    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.301922043s)
	W1025 08:31:59.183950    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:59.183967    4872 retry.go:31] will retry after 12.216152943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
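
The validation failure above means kubectl's client-side check found a document in ig-crd.yaml that declares neither `apiVersion` nor `kind`, the two header fields every Kubernetes manifest must carry. The file's real contents are not shown in this log, so the sketch below is purely hypothetical: a minimal CustomResourceDefinition (all names are placeholders, not the real inspektor-gadget CRD) whose header satisfies the same check, exercised with a client-side dry run.

# Hypothetical sketch, assuming only what the error message states: a minimal
# CRD carrying the apiVersion/kind header that kubectl reported missing.
# All names below are placeholders.
cat <<'EOF' | kubectl apply --dry-run=client -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.example.com    # must be <plural>.<group>
spec:
  group: example.com
  names:
    kind: Example
    plural: examples
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
EOF
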
	I1025 08:31:59.339204    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:59.339830    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:59.389522    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:59.617581    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:59.836733    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:59.839182    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:59.888329    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:00.135833    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:00.355791    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:00.356009    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:00.400517    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:00.617756    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:00.836661    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:00.838792    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:00.888519    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:01.117498    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:01.339395    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:01.339901    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:01.389071    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:01.619734    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:01.836819    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:01.839817    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:01.888723    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:02.117622    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:02.339584    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:02.340865    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:02.388839    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:02.617824    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:02.838581    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:02.839520    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:02.888751    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:03.117277    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:03.338551    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:03.338990    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:03.389430    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:03.617508    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:03.840072    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:03.840545    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:03.888581    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:04.117484    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:04.336825    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:04.339846    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:04.388941    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:04.618254    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:04.838798    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:04.839293    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:04.890046    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:05.118009    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:05.339614    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:05.340208    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:05.388430    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:05.619342    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:05.839804    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:05.840149    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:05.889793    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:06.118033    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:06.339534    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:06.339546    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:06.388775    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:06.624677    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:06.838588    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:06.842459    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:06.890095    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:07.121130    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:07.341274    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:07.341655    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:07.390229    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:07.618570    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:07.837307    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:07.840060    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:07.889550    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:08.117171    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:08.339183    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:08.339255    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:08.439731    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:08.619659    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:08.840759    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:08.844751    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:08.890624    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:09.118615    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:09.339987    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:09.341054    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:09.390426    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:09.620043    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:09.845345    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:09.845759    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:09.891451    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:10.122308    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:10.340078    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:10.340423    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:10.388804    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:10.617739    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:10.837059    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:10.839735    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:10.888205    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:11.118498    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:11.336959    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:11.339546    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:11.390472    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:11.400809    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:32:11.618491    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:11.843967    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:11.844388    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:11.943618    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:12.128608    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:32:12.314640    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:32:12.314672    4872 retry.go:31] will retry after 41.515236646s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:32:12.338562    4872 kapi.go:107] duration metric: took 1m11.004812987s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 08:32:12.339136    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:12.388281    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:12.617956    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:12.838915    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:12.889311    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:13.117764    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:13.339482    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:13.389482    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:13.618295    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:13.839487    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:13.888336    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:14.117386    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:14.339048    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:14.389157    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:14.617411    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:14.840684    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:14.888119    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:15.117203    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:15.342753    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:15.390614    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:15.617079    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:15.839880    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:15.889190    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:16.120421    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:16.339218    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:16.388934    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:16.617487    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:16.838945    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:16.889296    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:17.118084    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:17.339911    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:17.439966    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:17.617057    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:17.839964    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:17.889227    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:18.117569    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:18.339158    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:18.389451    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:18.618913    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:18.839821    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:18.890628    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:19.117875    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:19.339621    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:19.389083    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:19.617842    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:19.839679    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:19.888979    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:20.118007    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:20.339607    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:20.391004    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:20.618099    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:20.839739    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:20.888768    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:21.117782    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:21.339033    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:21.389139    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:21.620318    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:21.839907    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:21.888798    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:22.117948    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:22.340389    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:22.388756    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:22.617544    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:22.840220    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:22.888249    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:23.117190    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:23.338462    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:23.388487    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:23.617257    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:23.839078    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:23.889305    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:24.119163    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:24.340172    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:24.388644    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:24.619499    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:24.840499    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:24.890486    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:25.117970    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:25.340372    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:25.393603    4872 kapi.go:107] duration metric: took 1m20.508374791s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 08:32:25.398083    4872 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-468341 cluster.
	I1025 08:32:25.402089    4872 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 08:32:25.406096    4872 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
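
The three gcp-auth hints above describe how the credential-mounting webhook behaves. To illustrate the second hint, here is a hypothetical pod that opts out of credential mounting; the label key is quoted from the message itself, while the "true" value and the rest of the spec are assumptions.

# Hypothetical sketch: a pod labeled to skip gcp-auth credential mounting.
# The label key comes from the hint above; the "true" value is assumed.
cat <<'EOF' | kubectl apply --dry-run=client -f -
apiVersion: v1
kind: Pod
metadata:
  name: skip-gcp-auth-demo
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
    - name: echo
      image: docker.io/kicbase/echo-server:1.0
EOF
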
	I1025 08:32:25.617743    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:25.839713    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:26.117598    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:26.338472    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:26.617188    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:26.838242    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:27.116585    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:27.339421    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:27.616952    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:27.839652    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:28.125394    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:28.339139    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:28.617836    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:28.840759    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:29.117125    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:29.338121    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:29.618048    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:29.841016    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:30.121844    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:30.341217    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:30.617919    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:30.839459    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:31.123597    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:31.339296    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:31.617119    4872 kapi.go:107] duration metric: took 1m30.003578477s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 08:32:31.838920    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:32.338856    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:32.838152    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:33.338830    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:33.839599    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:34.338997    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:34.838362    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:35.339055    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:35.839555    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:36.339290    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:36.838724    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:37.339842    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:37.839698    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:38.339668    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:38.839504    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:39.339417    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:39.839282    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:40.339444    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:40.839735    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:41.339480    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:41.839791    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:42.339803    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:42.839797    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:43.339202    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:43.840412    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:44.340304    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:44.838871    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:45.339676    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:45.838707    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:46.339170    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:46.839176    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:47.339607    4872 kapi.go:107] duration metric: took 1m46.004264503s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 08:32:53.832692    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1025 08:32:54.635487    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 08:32:54.635580    4872 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
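
With the retries exhausted, minikube surfaces the inspektor-gadget failure above and continues with the remaining addons. The stderr names its own escape hatch: a manual retry would reuse the exact command from the log with the suggested flag, at the cost of applying the manifest unvalidated.

# Same command the addon runner used (paths copied from the log above),
# plus the --validate=false flag the error message itself suggests.
sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
  -f /etc/kubernetes/addons/ig-crd.yaml \
  -f /etc/kubernetes/addons/ig-deployment.yaml
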
	I1025 08:32:54.639080    4872 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, registry-creds, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1025 08:32:54.641952    4872 addons.go:514] duration metric: took 1m59.896093667s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin ingress-dns cloud-spanner storage-provisioner registry-creds metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1025 08:32:54.642026    4872 start.go:246] waiting for cluster config update ...
	I1025 08:32:54.642051    4872 start.go:255] writing updated cluster config ...
	I1025 08:32:54.643026    4872 ssh_runner.go:195] Run: rm -f paused
	I1025 08:32:54.647881    4872 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 08:32:54.651424    4872 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dh6v4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:54.656184    4872 pod_ready.go:94] pod "coredns-66bc5c9577-dh6v4" is "Ready"
	I1025 08:32:54.656206    4872 pod_ready.go:86] duration metric: took 4.751135ms for pod "coredns-66bc5c9577-dh6v4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:54.658529    4872 pod_ready.go:83] waiting for pod "etcd-addons-468341" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:54.662977    4872 pod_ready.go:94] pod "etcd-addons-468341" is "Ready"
	I1025 08:32:54.663008    4872 pod_ready.go:86] duration metric: took 4.450589ms for pod "etcd-addons-468341" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:54.665465    4872 pod_ready.go:83] waiting for pod "kube-apiserver-addons-468341" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:54.669962    4872 pod_ready.go:94] pod "kube-apiserver-addons-468341" is "Ready"
	I1025 08:32:54.670019    4872 pod_ready.go:86] duration metric: took 4.53237ms for pod "kube-apiserver-addons-468341" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:54.672232    4872 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-468341" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:55.051546    4872 pod_ready.go:94] pod "kube-controller-manager-addons-468341" is "Ready"
	I1025 08:32:55.051577    4872 pod_ready.go:86] duration metric: took 379.317529ms for pod "kube-controller-manager-addons-468341" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:55.252518    4872 pod_ready.go:83] waiting for pod "kube-proxy-58zqr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:55.662725    4872 pod_ready.go:94] pod "kube-proxy-58zqr" is "Ready"
	I1025 08:32:55.662768    4872 pod_ready.go:86] duration metric: took 410.223671ms for pod "kube-proxy-58zqr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:55.851901    4872 pod_ready.go:83] waiting for pod "kube-scheduler-addons-468341" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:56.252030    4872 pod_ready.go:94] pod "kube-scheduler-addons-468341" is "Ready"
	I1025 08:32:56.252118    4872 pod_ready.go:86] duration metric: took 400.19293ms for pod "kube-scheduler-addons-468341" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:56.252139    4872 pod_ready.go:40] duration metric: took 1.604221337s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 08:32:56.318873    4872 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 08:32:56.322044    4872 out.go:179] * Done! kubectl is now configured to use "addons-468341" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 08:35:49 addons-468341 crio[829]: time="2025-10-25T08:35:49.033498544Z" level=info msg="Removed container d533edfa08d18a3f496a5a3134238448a89988e66e1bb9af35fa27e9fd0271c9: kube-system/registry-creds-764b6fb674-q5vpt/registry-creds" id=194ffffe-ded2-4a21-a00c-fbf0c5322749 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 08:36:02 addons-468341 crio[829]: time="2025-10-25T08:36:02.690164685Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-mlr57/POD" id=6b2e5796-f71e-4b34-b67d-6d5dc484a889 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 08:36:02 addons-468341 crio[829]: time="2025-10-25T08:36:02.690245876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:36:02 addons-468341 crio[829]: time="2025-10-25T08:36:02.702088852Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-mlr57 Namespace:default ID:d38e7e761f9c8d98d23a565887a77a5f92ebd894cd3d9ab17706db8ae42db0c0 UID:af13902b-40c8-4b5c-951b-00a518af8c62 NetNS:/var/run/netns/8d0a626f-be59-486d-90fc-45e2fdb069b8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000bdb58}] Aliases:map[]}"
	Oct 25 08:36:02 addons-468341 crio[829]: time="2025-10-25T08:36:02.702256567Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-mlr57 to CNI network \"kindnet\" (type=ptp)"
	Oct 25 08:36:02 addons-468341 crio[829]: time="2025-10-25T08:36:02.730215573Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-mlr57 Namespace:default ID:d38e7e761f9c8d98d23a565887a77a5f92ebd894cd3d9ab17706db8ae42db0c0 UID:af13902b-40c8-4b5c-951b-00a518af8c62 NetNS:/var/run/netns/8d0a626f-be59-486d-90fc-45e2fdb069b8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000bdb58}] Aliases:map[]}"
	Oct 25 08:36:02 addons-468341 crio[829]: time="2025-10-25T08:36:02.730518905Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-mlr57 for CNI network kindnet (type=ptp)"
	Oct 25 08:36:02 addons-468341 crio[829]: time="2025-10-25T08:36:02.743004574Z" level=info msg="Ran pod sandbox d38e7e761f9c8d98d23a565887a77a5f92ebd894cd3d9ab17706db8ae42db0c0 with infra container: default/hello-world-app-5d498dc89-mlr57/POD" id=6b2e5796-f71e-4b34-b67d-6d5dc484a889 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 08:36:02 addons-468341 crio[829]: time="2025-10-25T08:36:02.744171076Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=37fc5dc7-7576-4df6-8ab1-63a1814ef205 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 08:36:02 addons-468341 crio[829]: time="2025-10-25T08:36:02.744295525Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=37fc5dc7-7576-4df6-8ab1-63a1814ef205 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 08:36:02 addons-468341 crio[829]: time="2025-10-25T08:36:02.744332005Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=37fc5dc7-7576-4df6-8ab1-63a1814ef205 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 08:36:02 addons-468341 crio[829]: time="2025-10-25T08:36:02.745137073Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=bd7994fb-c23b-4c40-8820-5f18e319e110 name=/runtime.v1.ImageService/PullImage
	Oct 25 08:36:02 addons-468341 crio[829]: time="2025-10-25T08:36:02.748741866Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 25 08:36:03 addons-468341 crio[829]: time="2025-10-25T08:36:03.40976711Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=bd7994fb-c23b-4c40-8820-5f18e319e110 name=/runtime.v1.ImageService/PullImage
	Oct 25 08:36:03 addons-468341 crio[829]: time="2025-10-25T08:36:03.416276106Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=df9d25e6-e148-4672-a9d2-fbd62eb23c80 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 08:36:03 addons-468341 crio[829]: time="2025-10-25T08:36:03.420462398Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a213af1f-d164-417f-b78f-7be958d01ed4 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 08:36:03 addons-468341 crio[829]: time="2025-10-25T08:36:03.431303297Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-mlr57/hello-world-app" id=cbf6018a-aeed-492f-86ea-c7ed3ae18692 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 08:36:03 addons-468341 crio[829]: time="2025-10-25T08:36:03.431460387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:36:03 addons-468341 crio[829]: time="2025-10-25T08:36:03.454649131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:36:03 addons-468341 crio[829]: time="2025-10-25T08:36:03.454884122Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/031393fb520de8d56001f1f594263e7f7157702842805732ca8f96ffab388bc8/merged/etc/passwd: no such file or directory"
	Oct 25 08:36:03 addons-468341 crio[829]: time="2025-10-25T08:36:03.45491444Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/031393fb520de8d56001f1f594263e7f7157702842805732ca8f96ffab388bc8/merged/etc/group: no such file or directory"
	Oct 25 08:36:03 addons-468341 crio[829]: time="2025-10-25T08:36:03.455194116Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:36:03 addons-468341 crio[829]: time="2025-10-25T08:36:03.492919451Z" level=info msg="Created container 71af332c9ae040b4541eff2a21e63a4054f52b407ee347914d59cf98bab56af3: default/hello-world-app-5d498dc89-mlr57/hello-world-app" id=cbf6018a-aeed-492f-86ea-c7ed3ae18692 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 08:36:03 addons-468341 crio[829]: time="2025-10-25T08:36:03.496302956Z" level=info msg="Starting container: 71af332c9ae040b4541eff2a21e63a4054f52b407ee347914d59cf98bab56af3" id=caa3f8f3-6e53-4063-a406-9f202fda8e81 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 08:36:03 addons-468341 crio[829]: time="2025-10-25T08:36:03.501867757Z" level=info msg="Started container" PID=7386 containerID=71af332c9ae040b4541eff2a21e63a4054f52b407ee347914d59cf98bab56af3 description=default/hello-world-app-5d498dc89-mlr57/hello-world-app id=caa3f8f3-6e53-4063-a406-9f202fda8e81 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d38e7e761f9c8d98d23a565887a77a5f92ebd894cd3d9ab17706db8ae42db0c0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	71af332c9ae04       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   d38e7e761f9c8       hello-world-app-5d498dc89-mlr57             default
	1f63e79fa6386       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             16 seconds ago           Exited              registry-creds                           1                   db66669039599       registry-creds-764b6fb674-q5vpt             kube-system
	ccc3d6334d183       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   4fa59643bf17c       nginx                                       default
	eb2475edef4a2       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   b570b05b0f283       busybox                                     default
	bab10a56ba960       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   ae2dc4b6a513f       ingress-nginx-controller-675c5ddd98-xp4f8   ingress-nginx
	5fd1e4aa2eaec       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   49b310bb5cde1       csi-hostpathplugin-wm2b7                    kube-system
	5f5e2ff55f9b9       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   49b310bb5cde1       csi-hostpathplugin-wm2b7                    kube-system
	fde287f234591       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   49b310bb5cde1       csi-hostpathplugin-wm2b7                    kube-system
	1c4a84678a48f       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   49b310bb5cde1       csi-hostpathplugin-wm2b7                    kube-system
	3976f771a2e1f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   49b310bb5cde1       csi-hostpathplugin-wm2b7                    kube-system
	be7c56ff634b9       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             3 minutes ago            Exited              patch                                    2                   2ab1bb1557333       ingress-nginx-admission-patch-jm6m8         ingress-nginx
	db5d3923d2801       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   9334c4716c221       gcp-auth-78565c9fb4-gdr72                   gcp-auth
	2bb61d4b205a8       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   e82859ead66ee       gadget-blz29                                gadget
	c675c035a6dbb       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   49b310bb5cde1       csi-hostpathplugin-wm2b7                    kube-system
	f4608f1e20335       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   75257ed111f61       nvidia-device-plugin-daemonset-w5ht9        kube-system
	a45310ef4d134       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              create                                   0                   5048757537bea       ingress-nginx-admission-create-wl2lj        ingress-nginx
	2cb17a3d4c7c6       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   39c01edc5d9b1       registry-6b586f9694-bl9lz                   kube-system
	53785f6bf53a5       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   e6a0cd13d50c3       local-path-provisioner-648f6765c9-52rwl     local-path-storage
	9dfa9f0508992       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   45bf921736c9b       metrics-server-85b7d694d7-rqmn4             kube-system
	149fe53d8f125       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   0c7fe301535fb       kube-ingress-dns-minikube                   kube-system
	c0dd415aff39e       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago            Running             csi-resizer                              0                   9948e9da76c8b       csi-hostpath-resizer-0                      kube-system
	3f2799b9c1b41       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   dd6d9c046cdb9       snapshot-controller-7d9fbc56b8-brfpz        kube-system
	d658e0bd52e0d       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   caf5f3599aca0       csi-hostpath-attacher-0                     kube-system
	6abccf54d473f       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   0c34ed7de6d2d       snapshot-controller-7d9fbc56b8-kwsbg        kube-system
	710588f58555a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              4 minutes ago            Running             registry-proxy                           0                   6102355ede3b4       registry-proxy-xjrqf                        kube-system
	66f18df75c44d       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   031b4c24d4c4e       cloud-spanner-emulator-86bd5cbb97-tkgt7     default
	93f2dd786953b       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   a55c68c16eb83       yakd-dashboard-5ff678cb9-6xxxr              yakd-dashboard
	990bc617d7987       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   d2b0670f61288       storage-provisioner                         kube-system
	3efcb5f51b3c4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   f159b8a028368       coredns-66bc5c9577-dh6v4                    kube-system
	6375e783d80c1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   2157f4cd8cfb7       kube-proxy-58zqr                            kube-system
	11da55b7006a0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   47791e14fc49f       kindnet-rb4dc                               kube-system
	a8a4b543d2547       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   fa4aa28c09db7       etcd-addons-468341                          kube-system
	f40b4040bdb0c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   96e4c19d3e480       kube-scheduler-addons-468341                kube-system
	ca4b0c8b5bb6a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   ecd67cfbd72fa       kube-apiserver-addons-468341                kube-system
	2105d8a4af178       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   ad6372860b36c       kube-controller-manager-addons-468341       kube-system
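
	The table above is the CRI RuntimeService's container list. A sketch over the same CRI-O socket as the pull example that prints one row per container (the column choice is illustrative):

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		rt := runtimeapi.NewRuntimeServiceClient(conn)

		resp, err := rt.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Truncated ID, state, and name, roughly the first columns above.
			fmt.Printf("%.13s  %-18s  %s\n", c.Id, c.State, c.Metadata.Name)
		}
	}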
	
	
	==> coredns [3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549] <==
	[INFO] 10.244.0.5:53607 - 47810 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001726283s
	[INFO] 10.244.0.5:53607 - 43547 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000121551s
	[INFO] 10.244.0.5:53607 - 64570 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000075429s
	[INFO] 10.244.0.5:40270 - 31308 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000188825s
	[INFO] 10.244.0.5:40270 - 31070 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000254081s
	[INFO] 10.244.0.5:38054 - 58316 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115061s
	[INFO] 10.244.0.5:38054 - 58497 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000294188s
	[INFO] 10.244.0.5:40217 - 62655 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085367s
	[INFO] 10.244.0.5:40217 - 62235 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000672s
	[INFO] 10.244.0.5:47154 - 16316 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001440021s
	[INFO] 10.244.0.5:47154 - 16111 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001542166s
	[INFO] 10.244.0.5:45851 - 13532 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000116784s
	[INFO] 10.244.0.5:45851 - 13111 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000186913s
	[INFO] 10.244.0.20:49885 - 49063 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000160173s
	[INFO] 10.244.0.20:58404 - 54588 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000105371s
	[INFO] 10.244.0.20:50653 - 22797 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144214s
	[INFO] 10.244.0.20:53434 - 63846 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084046s
	[INFO] 10.244.0.20:51048 - 20411 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000149285s
	[INFO] 10.244.0.20:55749 - 7642 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082888s
	[INFO] 10.244.0.20:36920 - 60131 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001965266s
	[INFO] 10.244.0.20:42946 - 24582 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002484938s
	[INFO] 10.244.0.20:59197 - 18586 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00148757s
	[INFO] 10.244.0.20:42203 - 35136 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001190034s
	[INFO] 10.244.0.23:55057 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000181949s
	[INFO] 10.244.0.23:51115 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126261s
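
	The NXDOMAIN/NOERROR pairs above are the pod resolver walking the cluster search list: with the usual in-cluster resolv.conf (search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal, ndots:5), registry.kube-system.svc.cluster.local has only four dots, so each search suffix is tried and answered NXDOMAIN before the bare name returns NOERROR. A minimal sketch of the standard way to skip that fan-out, a trailing dot marking the name fully qualified:

	package main

	import (
		"context"
		"fmt"
		"net"
	)

	func main() {
		// The trailing dot makes the name absolute, so the resolver sends a
		// single query instead of the per-suffix NXDOMAIN burst in the log.
		// Meant to run inside a cluster pod.
		addrs, err := net.DefaultResolver.LookupHost(context.Background(),
			"registry.kube-system.svc.cluster.local.")
		if err != nil {
			panic(err)
		}
		fmt.Println(addrs)
	}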
	
	
	==> describe nodes <==
	Name:               addons-468341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-468341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=addons-468341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T08_30_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-468341
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-468341"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 08:30:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-468341
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 08:35:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 08:35:56 +0000   Sat, 25 Oct 2025 08:30:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 08:35:56 +0000   Sat, 25 Oct 2025 08:30:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 08:35:56 +0000   Sat, 25 Oct 2025 08:30:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 08:35:56 +0000   Sat, 25 Oct 2025 08:31:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-468341
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                5c9900c3-5a2e-4ead-b0e4-60c2e9f9bb56
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     cloud-spanner-emulator-86bd5cbb97-tkgt7      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  default                     hello-world-app-5d498dc89-mlr57              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-blz29                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  gcp-auth                    gcp-auth-78565c9fb4-gdr72                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-xp4f8    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m3s
	  kube-system                 coredns-66bc5c9577-dh6v4                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m10s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 csi-hostpathplugin-wm2b7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 etcd-addons-468341                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m15s
	  kube-system                 kindnet-rb4dc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m10s
	  kube-system                 kube-apiserver-addons-468341                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-controller-manager-addons-468341        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-proxy-58zqr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-scheduler-addons-468341                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 metrics-server-85b7d694d7-rqmn4              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m5s
	  kube-system                 nvidia-device-plugin-daemonset-w5ht9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 registry-6b586f9694-bl9lz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 registry-creds-764b6fb674-q5vpt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 registry-proxy-xjrqf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 snapshot-controller-7d9fbc56b8-brfpz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 snapshot-controller-7d9fbc56b8-kwsbg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  local-path-storage          local-path-provisioner-648f6765c9-52rwl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-6xxxr               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m8s   kube-proxy       
	  Normal   Starting                 5m15s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m15s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m15s  kubelet          Node addons-468341 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m15s  kubelet          Node addons-468341 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m15s  kubelet          Node addons-468341 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m11s  node-controller  Node addons-468341 event: Registered Node addons-468341 in Controller
	  Normal   NodeReady                4m28s  kubelet          Node addons-468341 status is now: NodeReady
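
	The percentages under "Allocated resources" are just the request and limit sums from the pod table divided by the node's Allocatable row; reproduced as a quick check:

	package main

	import "fmt"

	func main() {
		// CPU: 1050m of requests against 2 allocatable CPUs (2000m).
		fmt.Printf("cpu requests: %d%%\n", 1050*100/2000) // 52%
		// Memory: 638Mi of requests against 8022296Ki allocatable.
		fmt.Printf("memory requests: %d%%\n", 638*1024*100/8022296) // 8%
	}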
	
	
	==> dmesg <==
	[Oct25 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014683] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497292] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033389] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.792499] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.870372] kauditd_printk_skb: 36 callbacks suppressed
	[Oct25 08:30] overlayfs: idmapped layers are currently not supported
	[  +0.060360] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215] <==
	{"level":"warn","ts":"2025-10-25T08:30:45.379234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.402064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.415059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.432877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.457490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.467113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.518277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.521635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.526812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.542749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.558513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.575968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.606920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.618784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.640679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.666265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.700420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.716837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.784402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:31:01.905434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:31:01.922891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:31:23.706668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:31:23.733672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:31:23.753642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:31:23.770266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51554","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [db5d3923d2801bf564674486be86cb41ac4d433e036ee2ab0b46330f811e3c2c] <==
	2025/10/25 08:32:24 GCP Auth Webhook started!
	2025/10/25 08:32:56 Ready to marshal response ...
	2025/10/25 08:32:56 Ready to write response ...
	2025/10/25 08:32:57 Ready to marshal response ...
	2025/10/25 08:32:57 Ready to write response ...
	2025/10/25 08:32:57 Ready to marshal response ...
	2025/10/25 08:32:57 Ready to write response ...
	2025/10/25 08:33:17 Ready to marshal response ...
	2025/10/25 08:33:17 Ready to write response ...
	2025/10/25 08:33:18 Ready to marshal response ...
	2025/10/25 08:33:18 Ready to write response ...
	2025/10/25 08:33:18 Ready to marshal response ...
	2025/10/25 08:33:18 Ready to write response ...
	2025/10/25 08:33:26 Ready to marshal response ...
	2025/10/25 08:33:26 Ready to write response ...
	2025/10/25 08:33:35 Ready to marshal response ...
	2025/10/25 08:33:35 Ready to write response ...
	2025/10/25 08:33:41 Ready to marshal response ...
	2025/10/25 08:33:41 Ready to write response ...
	2025/10/25 08:34:03 Ready to marshal response ...
	2025/10/25 08:34:03 Ready to write response ...
	2025/10/25 08:36:02 Ready to marshal response ...
	2025/10/25 08:36:02 Ready to write response ...
	
	
	==> kernel <==
	 08:36:04 up 18 min,  0 user,  load average: 0.20, 0.93, 0.55
	Linux addons-468341 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2] <==
	I1025 08:33:55.619157       1 main.go:301] handling current node
	I1025 08:34:05.617688       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:34:05.617725       1 main.go:301] handling current node
	I1025 08:34:15.618860       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:34:15.618935       1 main.go:301] handling current node
	I1025 08:34:25.617599       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:34:25.617652       1 main.go:301] handling current node
	I1025 08:34:35.617610       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:34:35.617641       1 main.go:301] handling current node
	I1025 08:34:45.617798       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:34:45.618020       1 main.go:301] handling current node
	I1025 08:34:55.617605       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:34:55.617638       1 main.go:301] handling current node
	I1025 08:35:05.618643       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:35:05.618679       1 main.go:301] handling current node
	I1025 08:35:15.618471       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:35:15.618508       1 main.go:301] handling current node
	I1025 08:35:25.626068       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:35:25.626101       1 main.go:301] handling current node
	I1025 08:35:35.624702       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:35:35.624815       1 main.go:301] handling current node
	I1025 08:35:45.626475       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:35:45.626509       1 main.go:301] handling current node
	I1025 08:35:55.619014       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:35:55.619058       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd] <==
	E1025 08:32:01.340309       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1025 08:32:01.340323       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 08:32:01.341417       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 08:32:01.341495       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1025 08:32:01.341507       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 08:32:10.187015       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 08:32:10.187084       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 08:32:10.188340       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.132.206:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.132.206:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.132.206:443: connect: connection refused" logger="UnhandledError"
	E1025 08:32:10.189108       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.132.206:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.132.206:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.132.206:443: connect: connection refused" logger="UnhandledError"
	E1025 08:32:10.195009       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.132.206:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.132.206:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.132.206:443: connect: connection refused" logger="UnhandledError"
	I1025 08:32:10.333638       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1025 08:33:06.314868       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:44472: use of closed network connection
	E1025 08:33:06.533726       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:44484: use of closed network connection
	E1025 08:33:06.669089       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:44504: use of closed network connection
	I1025 08:33:41.773895       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1025 08:33:42.061822       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.144.85"}
	I1025 08:33:45.701061       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1025 08:33:47.590112       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1025 08:36:02.553770       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.113.25"}
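
	The 503s and "connection refused" errors above are the aggregation layer failing to reach metrics-server behind the v1beta1.metrics.k8s.io APIService; they stop once the backing service answers. A sketch, assuming a kubeconfig at the default path, that performs the same reachability check from outside the apiserver:

	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Proxied through the apiserver to the aggregated API; a 503 here
		// matches the "failed to download v1beta1.metrics.k8s.io" errors.
		body, err := cs.Discovery().RESTClient().Get().
			AbsPath("/apis/metrics.k8s.io/v1beta1").
			DoRaw(context.Background())
		if err != nil {
			fmt.Println("aggregated API unavailable:", err)
			return
		}
		fmt.Printf("%.120s\n", body)
	}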
	
	
	==> kube-controller-manager [2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c] <==
	I1025 08:30:53.713012       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 08:30:53.713062       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 08:30:53.713280       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 08:30:53.714404       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 08:30:53.714454       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 08:30:53.714660       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 08:30:53.714760       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 08:30:53.714866       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 08:30:53.714911       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 08:30:53.715298       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 08:30:53.715385       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 08:30:53.733370       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 08:30:53.733401       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 08:30:53.733409       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1025 08:30:59.445922       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1025 08:31:23.699645       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 08:31:23.699810       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1025 08:31:23.699854       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1025 08:31:23.720778       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1025 08:31:23.726586       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1025 08:31:23.800516       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 08:31:23.826939       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 08:31:38.672612       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1025 08:31:53.806130       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 08:31:53.840023       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443] <==
	I1025 08:30:55.684549       1 server_linux.go:53] "Using iptables proxy"
	I1025 08:30:55.791775       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 08:30:55.892624       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 08:30:55.892708       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 08:30:55.892793       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 08:30:55.947398       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 08:30:55.947452       1 server_linux.go:132] "Using iptables Proxier"
	I1025 08:30:55.953607       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 08:30:55.965462       1 server.go:527] "Version info" version="v1.34.1"
	I1025 08:30:55.965490       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 08:30:55.966975       1 config.go:106] "Starting endpoint slice config controller"
	I1025 08:30:55.967000       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 08:30:55.967361       1 config.go:200] "Starting service config controller"
	I1025 08:30:55.967378       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 08:30:55.967721       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 08:30:55.967735       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 08:30:55.968166       1 config.go:309] "Starting node config controller"
	I1025 08:30:55.968172       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 08:30:55.968178       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 08:30:56.070548       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 08:30:56.070633       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 08:30:56.070955       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
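
	The "Starting ... config controller" / "Waiting for caches to sync" / "Caches are synced" triplets are the standard client-go shared-informer startup pattern, which kube-proxy wires up internally. A sketch of the same pattern against the default kubeconfig:

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		// "Starting service config controller" amounts to starting an informer...
		factory := informers.NewSharedInformerFactory(cs, 0)
		services := factory.Core().V1().Services().Informer()
		factory.Start(ctx.Done())

		// ..."Waiting for caches to sync" / "Caches are synced" is this call.
		if !cache.WaitForCacheSync(ctx.Done(), services.HasSynced) {
			panic("caches did not sync")
		}
		fmt.Println("caches are synced")
	}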
	
	
	==> kube-scheduler [f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179] <==
	E1025 08:30:46.831568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 08:30:46.831740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 08:30:46.831869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 08:30:46.831871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 08:30:46.831923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 08:30:46.831973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 08:30:46.832022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 08:30:46.832068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 08:30:46.832157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 08:30:46.832169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 08:30:46.832228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 08:30:46.832277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 08:30:46.832319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 08:30:46.832356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 08:30:46.832398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 08:30:46.832474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 08:30:46.832568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 08:30:46.834166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 08:30:47.637677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 08:30:47.640113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 08:30:47.684573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 08:30:47.709377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 08:30:47.797381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 08:30:47.890598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1025 08:30:49.812317       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 08:34:11 addons-468341 kubelet[1291]: I1025 08:34:11.662800    1291 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6905c0603603ada37568b3ed687fd3ceb7bb87cb291224dfe9200c9f8e7fdd06"} err="failed to get container status \"6905c0603603ada37568b3ed687fd3ceb7bb87cb291224dfe9200c9f8e7fdd06\": rpc error: code = NotFound desc = could not find container \"6905c0603603ada37568b3ed687fd3ceb7bb87cb291224dfe9200c9f8e7fdd06\": container with ID starting with 6905c0603603ada37568b3ed687fd3ceb7bb87cb291224dfe9200c9f8e7fdd06 not found: ID does not exist"
	Oct 25 08:34:11 addons-468341 kubelet[1291]: I1025 08:34:11.718300    1291 reconciler_common.go:299] "Volume detached for volume \"pvc-737f5650-f892-4a07-a48c-db48fcbd75a6\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^5bef4180-b17d-11f0-96f1-4e2b1f0c8204\") on node \"addons-468341\" DevicePath \"\""
	Oct 25 08:34:13 addons-468341 kubelet[1291]: I1025 08:34:13.545628    1291 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4081f0b6-60d2-4f7e-94f4-7ba4aebca472" path="/var/lib/kubelet/pods/4081f0b6-60d2-4f7e-94f4-7ba4aebca472/volumes"
	Oct 25 08:34:24 addons-468341 kubelet[1291]: I1025 08:34:24.542159    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-bl9lz" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:34:37 addons-468341 kubelet[1291]: I1025 08:34:37.542911    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-xjrqf" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:34:51 addons-468341 kubelet[1291]: I1025 08:34:51.542891    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-w5ht9" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:35:27 addons-468341 kubelet[1291]: I1025 08:35:27.542986    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-bl9lz" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:35:46 addons-468341 kubelet[1291]: I1025 08:35:46.343254    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-q5vpt" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:35:46 addons-468341 kubelet[1291]: W1025 08:35:46.374061    1291 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111/crio-db666690395997e1933bd76a3d27511a05e65907d05438e57a753ee5dbd5c7cf WatchSource:0}: Error finding container db666690395997e1933bd76a3d27511a05e65907d05438e57a753ee5dbd5c7cf: Status 404 returned error can't find the container with id db666690395997e1933bd76a3d27511a05e65907d05438e57a753ee5dbd5c7cf
	Oct 25 08:35:47 addons-468341 kubelet[1291]: I1025 08:35:47.998554    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-q5vpt" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:35:47 addons-468341 kubelet[1291]: I1025 08:35:47.999195    1291 scope.go:117] "RemoveContainer" containerID="d533edfa08d18a3f496a5a3134238448a89988e66e1bb9af35fa27e9fd0271c9"
	Oct 25 08:35:49 addons-468341 kubelet[1291]: I1025 08:35:49.006504    1291 scope.go:117] "RemoveContainer" containerID="d533edfa08d18a3f496a5a3134238448a89988e66e1bb9af35fa27e9fd0271c9"
	Oct 25 08:35:49 addons-468341 kubelet[1291]: I1025 08:35:49.007500    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-q5vpt" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:35:49 addons-468341 kubelet[1291]: I1025 08:35:49.007670    1291 scope.go:117] "RemoveContainer" containerID="1f63e79fa638611b101374069496549a2618d764597844c8ff78234b0b6ddcec"
	Oct 25 08:35:49 addons-468341 kubelet[1291]: E1025 08:35:49.007903    1291 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-q5vpt_kube-system(0ac9c422-007f-4643-8aaa-fa94a38fc826)\"" pod="kube-system/registry-creds-764b6fb674-q5vpt" podUID="0ac9c422-007f-4643-8aaa-fa94a38fc826"
	Oct 25 08:35:49 addons-468341 kubelet[1291]: I1025 08:35:49.544170    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-xjrqf" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:35:49 addons-468341 kubelet[1291]: E1025 08:35:49.680103    1291 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/78c21e923d2baaf7b01c951d79825a3de60188937742803b604bdad9cbaa021b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/78c21e923d2baaf7b01c951d79825a3de60188937742803b604bdad9cbaa021b/diff: no such file or directory, extraDiskErr: <nil>
	Oct 25 08:35:50 addons-468341 kubelet[1291]: I1025 08:35:50.015939    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-q5vpt" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:35:50 addons-468341 kubelet[1291]: I1025 08:35:50.015995    1291 scope.go:117] "RemoveContainer" containerID="1f63e79fa638611b101374069496549a2618d764597844c8ff78234b0b6ddcec"
	Oct 25 08:35:50 addons-468341 kubelet[1291]: E1025 08:35:50.016155    1291 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-q5vpt_kube-system(0ac9c422-007f-4643-8aaa-fa94a38fc826)\"" pod="kube-system/registry-creds-764b6fb674-q5vpt" podUID="0ac9c422-007f-4643-8aaa-fa94a38fc826"
	Oct 25 08:36:02 addons-468341 kubelet[1291]: I1025 08:36:02.523173    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzfw9\" (UniqueName: \"kubernetes.io/projected/af13902b-40c8-4b5c-951b-00a518af8c62-kube-api-access-tzfw9\") pod \"hello-world-app-5d498dc89-mlr57\" (UID: \"af13902b-40c8-4b5c-951b-00a518af8c62\") " pod="default/hello-world-app-5d498dc89-mlr57"
	Oct 25 08:36:02 addons-468341 kubelet[1291]: I1025 08:36:02.523240    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/af13902b-40c8-4b5c-951b-00a518af8c62-gcp-creds\") pod \"hello-world-app-5d498dc89-mlr57\" (UID: \"af13902b-40c8-4b5c-951b-00a518af8c62\") " pod="default/hello-world-app-5d498dc89-mlr57"
	Oct 25 08:36:02 addons-468341 kubelet[1291]: W1025 08:36:02.739721    1291 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111/crio-d38e7e761f9c8d98d23a565887a77a5f92ebd894cd3d9ab17706db8ae42db0c0 WatchSource:0}: Error finding container d38e7e761f9c8d98d23a565887a77a5f92ebd894cd3d9ab17706db8ae42db0c0: Status 404 returned error can't find the container with id d38e7e761f9c8d98d23a565887a77a5f92ebd894cd3d9ab17706db8ae42db0c0
	Oct 25 08:36:04 addons-468341 kubelet[1291]: I1025 08:36:04.542854    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-q5vpt" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:36:04 addons-468341 kubelet[1291]: I1025 08:36:04.542931    1291 scope.go:117] "RemoveContainer" containerID="1f63e79fa638611b101374069496549a2618d764597844c8ff78234b0b6ddcec"
	
	
	==> storage-provisioner [990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291] <==
	W1025 08:35:40.755342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:42.758391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:42.763091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:44.765864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:44.777278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:46.780396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:46.785954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:48.789593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:48.794530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:50.797348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:50.801733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:52.805081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:52.809587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:54.812794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:54.824152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:56.827441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:56.834038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:58.837335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:35:58.842348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:00.845806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:00.852426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:02.857159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:02.866244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:04.870248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:36:04.877503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
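Two recurring signatures in the dump above are noise rather than the failure itself: the kube-scheduler list/watch RBAC errors occur only during startup, before its informer caches sync at 08:30:49, and the storage-provisioner lines are deprecation warnings because it still reads v1 Endpoints, likely for its leader-election lock. The non-deprecated read path goes through discovery.k8s.io/v1 EndpointSlices; below is a minimal client-go sketch of that path, assuming in-cluster config, with the namespace and the "kube-dns" label target chosen only for illustration.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Assumption: this sketch runs inside the cluster, like the provisioner pod.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// EndpointSlices for a Service carry the kubernetes.io/service-name label;
		// "kube-dns" is an example target, not what storage-provisioner watches.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
			context.TODO(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kube-dns"},
		)
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			for _, ep := range s.Endpoints {
				fmt.Println(s.Name, ep.Addresses)
			}
		}
	}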
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-468341 -n addons-468341
helpers_test.go:269: (dbg) Run:  kubectl --context addons-468341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-wl2lj ingress-nginx-admission-patch-jm6m8
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-468341 describe pod ingress-nginx-admission-create-wl2lj ingress-nginx-admission-patch-jm6m8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-468341 describe pod ingress-nginx-admission-create-wl2lj ingress-nginx-admission-patch-jm6m8: exit status 1 (111.217519ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wl2lj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-jm6m8" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-468341 describe pod ingress-nginx-admission-create-wl2lj ingress-nginx-admission-patch-jm6m8: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-468341 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (269.175124ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:36:05.884204   14769 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:36:05.884428   14769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:36:05.884456   14769 out.go:374] Setting ErrFile to fd 2...
	I1025 08:36:05.884474   14769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:36:05.884774   14769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:36:05.885123   14769 mustload.go:65] Loading cluster: addons-468341
	I1025 08:36:05.885571   14769 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:36:05.885613   14769 addons.go:606] checking whether the cluster is paused
	I1025 08:36:05.885754   14769 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:36:05.885788   14769 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:36:05.886339   14769 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:36:05.917872   14769 ssh_runner.go:195] Run: systemctl --version
	I1025 08:36:05.917922   14769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:36:05.940321   14769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:36:06.044775   14769 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:36:06.044862   14769 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:36:06.077085   14769 cri.go:89] found id: "42ca72d4f4be1623424165b7a43e5f21bb48c575214ebd4c39eefa05e4e8399f"
	I1025 08:36:06.077110   14769 cri.go:89] found id: "5fd1e4aa2eaec9a625e731b2310d7e53df6715992458f7b34f24c564a0371c93"
	I1025 08:36:06.077115   14769 cri.go:89] found id: "5f5e2ff55f9b9d96f5647e7d06d29ba2d01ac53c5f3bebb14c41ac3b15d46dbe"
	I1025 08:36:06.077118   14769 cri.go:89] found id: "fde287f23459127cc0438aff73855d690ce348105c0984d314227ebb893c0f27"
	I1025 08:36:06.077122   14769 cri.go:89] found id: "1c4a84678a48fe97a0d5e107fffed343bbdb5797c60914bfc1787ef22ec8aab6"
	I1025 08:36:06.077126   14769 cri.go:89] found id: "3976f771a2e1f4333179d9b66178f2b8d0e5df890f3e3934fa6b45cecc30471a"
	I1025 08:36:06.077129   14769 cri.go:89] found id: "c675c035a6dbbaf595ca82e7e5494c65d80846fa4403b093199b14027e8a1667"
	I1025 08:36:06.077133   14769 cri.go:89] found id: "f4608f1e203354a663895813be5940b353d5999288ba102363d0fc2ab5bee267"
	I1025 08:36:06.077136   14769 cri.go:89] found id: "2cb17a3d4c7c6272eb0c5b9a7105721c94d5c62089aae769495e41fd49a9130b"
	I1025 08:36:06.077141   14769 cri.go:89] found id: "9dfa9f05089922f4312c61cbba94162087a4b0ccb02343ec9524bd69b39eec86"
	I1025 08:36:06.077145   14769 cri.go:89] found id: "149fe53d8f125d10a2d1e5e32c97a97f55639e392cf1a6745398ed8ac79e7d66"
	I1025 08:36:06.077148   14769 cri.go:89] found id: "c0dd415aff39e64402a3d64867549a7a3b05dfcf210f7e1f94409f91384fa997"
	I1025 08:36:06.077151   14769 cri.go:89] found id: "3f2799b9c1b41eaec314d09a1269f3e858b9982860af8c4168ed54a9280b011d"
	I1025 08:36:06.077154   14769 cri.go:89] found id: "d658e0bd52e0d32446762f4e3466c7f28b2e31c7a876c4caaddd5444b238f373"
	I1025 08:36:06.077157   14769 cri.go:89] found id: "6abccf54d473ff21bf8b7e7a67510c44ab418b13e2ded840ecd714d35bc33050"
	I1025 08:36:06.077164   14769 cri.go:89] found id: "710588f58555a870efb846b35789c67ba50b0c47c5b86a2c12d2ebe8b7d3cf14"
	I1025 08:36:06.077172   14769 cri.go:89] found id: "990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291"
	I1025 08:36:06.077176   14769 cri.go:89] found id: "3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549"
	I1025 08:36:06.077180   14769 cri.go:89] found id: "6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443"
	I1025 08:36:06.077183   14769 cri.go:89] found id: "11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2"
	I1025 08:36:06.077187   14769 cri.go:89] found id: "a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215"
	I1025 08:36:06.077190   14769 cri.go:89] found id: "f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179"
	I1025 08:36:06.077193   14769 cri.go:89] found id: "ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd"
	I1025 08:36:06.077197   14769 cri.go:89] found id: "2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c"
	I1025 08:36:06.077205   14769 cri.go:89] found id: ""
	I1025 08:36:06.077256   14769 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:36:06.093898   14769 out.go:203] 
	W1025 08:36:06.096980   14769 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:36:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:36:06.097010   14769 out.go:285] * 
	W1025 08:36:06.100856   14769 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:36:06.103912   14769 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-468341 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-468341 addons disable ingress --alsologtostderr -v=1: exit status 11 (257.252387ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:36:06.159137   14812 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:36:06.159363   14812 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:36:06.159392   14812 out.go:374] Setting ErrFile to fd 2...
	I1025 08:36:06.159412   14812 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:36:06.159710   14812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:36:06.160037   14812 mustload.go:65] Loading cluster: addons-468341
	I1025 08:36:06.160567   14812 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:36:06.160609   14812 addons.go:606] checking whether the cluster is paused
	I1025 08:36:06.160758   14812 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:36:06.160790   14812 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:36:06.161291   14812 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:36:06.179530   14812 ssh_runner.go:195] Run: systemctl --version
	I1025 08:36:06.179586   14812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:36:06.201902   14812 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:36:06.304392   14812 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:36:06.304493   14812 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:36:06.336919   14812 cri.go:89] found id: "42ca72d4f4be1623424165b7a43e5f21bb48c575214ebd4c39eefa05e4e8399f"
	I1025 08:36:06.336942   14812 cri.go:89] found id: "5fd1e4aa2eaec9a625e731b2310d7e53df6715992458f7b34f24c564a0371c93"
	I1025 08:36:06.336947   14812 cri.go:89] found id: "5f5e2ff55f9b9d96f5647e7d06d29ba2d01ac53c5f3bebb14c41ac3b15d46dbe"
	I1025 08:36:06.336951   14812 cri.go:89] found id: "fde287f23459127cc0438aff73855d690ce348105c0984d314227ebb893c0f27"
	I1025 08:36:06.336955   14812 cri.go:89] found id: "1c4a84678a48fe97a0d5e107fffed343bbdb5797c60914bfc1787ef22ec8aab6"
	I1025 08:36:06.336959   14812 cri.go:89] found id: "3976f771a2e1f4333179d9b66178f2b8d0e5df890f3e3934fa6b45cecc30471a"
	I1025 08:36:06.336962   14812 cri.go:89] found id: "c675c035a6dbbaf595ca82e7e5494c65d80846fa4403b093199b14027e8a1667"
	I1025 08:36:06.336965   14812 cri.go:89] found id: "f4608f1e203354a663895813be5940b353d5999288ba102363d0fc2ab5bee267"
	I1025 08:36:06.336969   14812 cri.go:89] found id: "2cb17a3d4c7c6272eb0c5b9a7105721c94d5c62089aae769495e41fd49a9130b"
	I1025 08:36:06.336975   14812 cri.go:89] found id: "9dfa9f05089922f4312c61cbba94162087a4b0ccb02343ec9524bd69b39eec86"
	I1025 08:36:06.336978   14812 cri.go:89] found id: "149fe53d8f125d10a2d1e5e32c97a97f55639e392cf1a6745398ed8ac79e7d66"
	I1025 08:36:06.336981   14812 cri.go:89] found id: "c0dd415aff39e64402a3d64867549a7a3b05dfcf210f7e1f94409f91384fa997"
	I1025 08:36:06.336986   14812 cri.go:89] found id: "3f2799b9c1b41eaec314d09a1269f3e858b9982860af8c4168ed54a9280b011d"
	I1025 08:36:06.336989   14812 cri.go:89] found id: "d658e0bd52e0d32446762f4e3466c7f28b2e31c7a876c4caaddd5444b238f373"
	I1025 08:36:06.336992   14812 cri.go:89] found id: "6abccf54d473ff21bf8b7e7a67510c44ab418b13e2ded840ecd714d35bc33050"
	I1025 08:36:06.336997   14812 cri.go:89] found id: "710588f58555a870efb846b35789c67ba50b0c47c5b86a2c12d2ebe8b7d3cf14"
	I1025 08:36:06.337000   14812 cri.go:89] found id: "990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291"
	I1025 08:36:06.337005   14812 cri.go:89] found id: "3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549"
	I1025 08:36:06.337008   14812 cri.go:89] found id: "6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443"
	I1025 08:36:06.337011   14812 cri.go:89] found id: "11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2"
	I1025 08:36:06.337016   14812 cri.go:89] found id: "a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215"
	I1025 08:36:06.337024   14812 cri.go:89] found id: "f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179"
	I1025 08:36:06.337027   14812 cri.go:89] found id: "ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd"
	I1025 08:36:06.337031   14812 cri.go:89] found id: "2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c"
	I1025 08:36:06.337035   14812 cri.go:89] found id: ""
	I1025 08:36:06.337087   14812 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:36:06.351620   14812 out.go:203] 
	W1025 08:36:06.354589   14812 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:36:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:36:06.354622   14812 out.go:285] * 
	W1025 08:36:06.358481   14812 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:36:06.361490   14812 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-468341 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.89s)
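Every `addons disable` failure in this run shares one root cause visible in the stderr above: after listing kube-system containers with crictl, minikube's paused check runs `sudo runc list -f json` on the node, /run/runc does not exist under this cri-o setup, runc exits 1, and the exit is surfaced as MK_ADDON_DISABLE_PAUSED instead of the addon being disabled. A minimal Go sketch of such a check follows, assuming only that runc is on PATH; this is an editorial illustration, not minikube's actual implementation.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer mirrors the fields we need from `runc list -f json`.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func anyPaused() (bool, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// The branch hit in the logs above: runc cannot open its state
			// directory ("open /run/runc: no such file or directory") and
			// exits with status 1 before printing any container list.
			return false, fmt.Errorf("list paused: runc: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return false, err
		}
		for _, c := range cs {
			if c.Status == "paused" {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		paused, err := anyPaused()
		fmt.Println(paused, err)
	}

The runc listing is the step that cannot succeed while /run/runc is absent, regardless of whether any container is actually paused, which is why the same exit status 11 repeats across otherwise healthy addons below.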

TestAddons/parallel/InspektorGadget (6.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-blz29" [609f75cf-124c-4aa1-b0c2-892b50be3a46] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003258502s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-468341 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (256.065754ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:33:41.274488   12727 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:33:41.274715   12727 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:41.274728   12727 out.go:374] Setting ErrFile to fd 2...
	I1025 08:33:41.274733   12727 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:41.274996   12727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:33:41.275256   12727 mustload.go:65] Loading cluster: addons-468341
	I1025 08:33:41.275607   12727 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:41.275624   12727 addons.go:606] checking whether the cluster is paused
	I1025 08:33:41.275723   12727 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:41.275739   12727 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:33:41.276190   12727 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:33:41.293031   12727 ssh_runner.go:195] Run: systemctl --version
	I1025 08:33:41.293097   12727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:33:41.311977   12727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:33:41.416478   12727 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:33:41.416565   12727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:33:41.444854   12727 cri.go:89] found id: "5fd1e4aa2eaec9a625e731b2310d7e53df6715992458f7b34f24c564a0371c93"
	I1025 08:33:41.444880   12727 cri.go:89] found id: "5f5e2ff55f9b9d96f5647e7d06d29ba2d01ac53c5f3bebb14c41ac3b15d46dbe"
	I1025 08:33:41.444887   12727 cri.go:89] found id: "fde287f23459127cc0438aff73855d690ce348105c0984d314227ebb893c0f27"
	I1025 08:33:41.444891   12727 cri.go:89] found id: "1c4a84678a48fe97a0d5e107fffed343bbdb5797c60914bfc1787ef22ec8aab6"
	I1025 08:33:41.444894   12727 cri.go:89] found id: "3976f771a2e1f4333179d9b66178f2b8d0e5df890f3e3934fa6b45cecc30471a"
	I1025 08:33:41.444898   12727 cri.go:89] found id: "c675c035a6dbbaf595ca82e7e5494c65d80846fa4403b093199b14027e8a1667"
	I1025 08:33:41.444901   12727 cri.go:89] found id: "f4608f1e203354a663895813be5940b353d5999288ba102363d0fc2ab5bee267"
	I1025 08:33:41.444904   12727 cri.go:89] found id: "2cb17a3d4c7c6272eb0c5b9a7105721c94d5c62089aae769495e41fd49a9130b"
	I1025 08:33:41.444908   12727 cri.go:89] found id: "9dfa9f05089922f4312c61cbba94162087a4b0ccb02343ec9524bd69b39eec86"
	I1025 08:33:41.444922   12727 cri.go:89] found id: "149fe53d8f125d10a2d1e5e32c97a97f55639e392cf1a6745398ed8ac79e7d66"
	I1025 08:33:41.444929   12727 cri.go:89] found id: "c0dd415aff39e64402a3d64867549a7a3b05dfcf210f7e1f94409f91384fa997"
	I1025 08:33:41.444932   12727 cri.go:89] found id: "3f2799b9c1b41eaec314d09a1269f3e858b9982860af8c4168ed54a9280b011d"
	I1025 08:33:41.444939   12727 cri.go:89] found id: "d658e0bd52e0d32446762f4e3466c7f28b2e31c7a876c4caaddd5444b238f373"
	I1025 08:33:41.444943   12727 cri.go:89] found id: "6abccf54d473ff21bf8b7e7a67510c44ab418b13e2ded840ecd714d35bc33050"
	I1025 08:33:41.444946   12727 cri.go:89] found id: "710588f58555a870efb846b35789c67ba50b0c47c5b86a2c12d2ebe8b7d3cf14"
	I1025 08:33:41.444954   12727 cri.go:89] found id: "990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291"
	I1025 08:33:41.444962   12727 cri.go:89] found id: "3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549"
	I1025 08:33:41.444967   12727 cri.go:89] found id: "6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443"
	I1025 08:33:41.444971   12727 cri.go:89] found id: "11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2"
	I1025 08:33:41.444974   12727 cri.go:89] found id: "a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215"
	I1025 08:33:41.444978   12727 cri.go:89] found id: "f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179"
	I1025 08:33:41.444981   12727 cri.go:89] found id: "ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd"
	I1025 08:33:41.444984   12727 cri.go:89] found id: "2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c"
	I1025 08:33:41.444987   12727 cri.go:89] found id: ""
	I1025 08:33:41.445042   12727 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:33:41.459866   12727 out.go:203] 
	W1025 08:33:41.462764   12727 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:33:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:33:41.462785   12727 out.go:285] * 
	W1025 08:33:41.466640   12727 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:33:41.469537   12727 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-468341 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)
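The gadget pod itself was healthy within about 6 seconds; only the trailing disable step failed, for the same runc reason noted under TestAddons/parallel/Ingress. For context, the wait at addons_test.go:823 amounts to polling for a Running, Ready pod with the given label. A self-contained client-go sketch of that pattern follows; the kubeconfig path, interval, and timeout are assumptions, and the real helper lives in helpers_test.go.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the Ready condition is true on the pod.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 2s, up to the 8m0s budget the test log mentions.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 8*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("gadget").List(ctx, metav1.ListOptions{
					LabelSelector: "k8s-app=gadget",
				})
				if err != nil {
					return false, nil // treat API hiccups as retryable
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning && podReady(&p) {
						return true, nil
					}
				}
				return false, nil
			})
		fmt.Println("healthy:", err == nil)
	}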

TestAddons/parallel/MetricsServer (5.37s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.399226ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-rqmn4" [1709dfd1-357e-496a-98b7-205be9cae357] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009966419s
addons_test.go:463: (dbg) Run:  kubectl --context addons-468341 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-468341 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (270.978439ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:33:34.994084   12501 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:33:34.994280   12501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:34.994289   12501 out.go:374] Setting ErrFile to fd 2...
	I1025 08:33:34.994295   12501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:34.994563   12501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:33:34.994884   12501 mustload.go:65] Loading cluster: addons-468341
	I1025 08:33:34.995310   12501 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:34.995327   12501 addons.go:606] checking whether the cluster is paused
	I1025 08:33:34.995437   12501 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:34.995447   12501 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:33:34.995949   12501 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:33:35.018075   12501 ssh_runner.go:195] Run: systemctl --version
	I1025 08:33:35.018139   12501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:33:35.036127   12501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:33:35.144639   12501 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:33:35.144771   12501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:33:35.184632   12501 cri.go:89] found id: "5fd1e4aa2eaec9a625e731b2310d7e53df6715992458f7b34f24c564a0371c93"
	I1025 08:33:35.184666   12501 cri.go:89] found id: "5f5e2ff55f9b9d96f5647e7d06d29ba2d01ac53c5f3bebb14c41ac3b15d46dbe"
	I1025 08:33:35.184672   12501 cri.go:89] found id: "fde287f23459127cc0438aff73855d690ce348105c0984d314227ebb893c0f27"
	I1025 08:33:35.184676   12501 cri.go:89] found id: "1c4a84678a48fe97a0d5e107fffed343bbdb5797c60914bfc1787ef22ec8aab6"
	I1025 08:33:35.184679   12501 cri.go:89] found id: "3976f771a2e1f4333179d9b66178f2b8d0e5df890f3e3934fa6b45cecc30471a"
	I1025 08:33:35.184683   12501 cri.go:89] found id: "c675c035a6dbbaf595ca82e7e5494c65d80846fa4403b093199b14027e8a1667"
	I1025 08:33:35.184687   12501 cri.go:89] found id: "f4608f1e203354a663895813be5940b353d5999288ba102363d0fc2ab5bee267"
	I1025 08:33:35.184690   12501 cri.go:89] found id: "2cb17a3d4c7c6272eb0c5b9a7105721c94d5c62089aae769495e41fd49a9130b"
	I1025 08:33:35.184692   12501 cri.go:89] found id: "9dfa9f05089922f4312c61cbba94162087a4b0ccb02343ec9524bd69b39eec86"
	I1025 08:33:35.184699   12501 cri.go:89] found id: "149fe53d8f125d10a2d1e5e32c97a97f55639e392cf1a6745398ed8ac79e7d66"
	I1025 08:33:35.184703   12501 cri.go:89] found id: "c0dd415aff39e64402a3d64867549a7a3b05dfcf210f7e1f94409f91384fa997"
	I1025 08:33:35.184706   12501 cri.go:89] found id: "3f2799b9c1b41eaec314d09a1269f3e858b9982860af8c4168ed54a9280b011d"
	I1025 08:33:35.184709   12501 cri.go:89] found id: "d658e0bd52e0d32446762f4e3466c7f28b2e31c7a876c4caaddd5444b238f373"
	I1025 08:33:35.184712   12501 cri.go:89] found id: "6abccf54d473ff21bf8b7e7a67510c44ab418b13e2ded840ecd714d35bc33050"
	I1025 08:33:35.184716   12501 cri.go:89] found id: "710588f58555a870efb846b35789c67ba50b0c47c5b86a2c12d2ebe8b7d3cf14"
	I1025 08:33:35.184734   12501 cri.go:89] found id: "990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291"
	I1025 08:33:35.184742   12501 cri.go:89] found id: "3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549"
	I1025 08:33:35.184747   12501 cri.go:89] found id: "6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443"
	I1025 08:33:35.184750   12501 cri.go:89] found id: "11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2"
	I1025 08:33:35.184753   12501 cri.go:89] found id: "a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215"
	I1025 08:33:35.184758   12501 cri.go:89] found id: "f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179"
	I1025 08:33:35.184762   12501 cri.go:89] found id: "ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd"
	I1025 08:33:35.184765   12501 cri.go:89] found id: "2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c"
	I1025 08:33:35.184769   12501 cri.go:89] found id: ""
	I1025 08:33:35.184829   12501 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:33:35.200596   12501 out.go:203] 
	W1025 08:33:35.203631   12501 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:33:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:33:35.203649   12501 out.go:285] * 
	W1025 08:33:35.207534   12501 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:33:35.210580   12501 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-468341 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.37s)
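Here `kubectl top pods` (addons_test.go:463) succeeded, which confirms metrics-server was serving the metrics.k8s.io API before the disable step hit the shared runc failure. A hedged sketch of the equivalent read through the k8s.io/metrics client follows; the kubeconfig path is an assumption.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		metricsv "k8s.io/metrics/pkg/client/clientset/versioned"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		mc := metricsv.NewForConfigOrDie(cfg)

		// This is the same API `kubectl top pods -n kube-system` queries; it
		// returns an error while metrics-server is still warming up.
		pm, err := mc.MetricsV1beta1().PodMetricses("kube-system").List(
			context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pm.Items {
			for _, c := range p.Containers {
				fmt.Printf("%s/%s cpu=%s mem=%s\n",
					p.Name, c.Name, c.Usage.Cpu(), c.Usage.Memory())
			}
		}
	}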

TestAddons/parallel/CSI (45.55s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1025 08:33:27.159319    4110 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1025 08:33:27.164316    4110 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1025 08:33:27.164342    4110 kapi.go:107] duration metric: took 5.030782ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.041794ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-468341 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-468341 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c4b44f11-76a9-42b0-a05c-a4202930b500] Pending
helpers_test.go:352: "task-pv-pod" [c4b44f11-76a9-42b0-a05c-a4202930b500] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c4b44f11-76a9-42b0-a05c-a4202930b500] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004086788s
addons_test.go:572: (dbg) Run:  kubectl --context addons-468341 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-468341 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-468341 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-468341 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-468341 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-468341 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-468341 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [4081f0b6-60d2-4f7e-94f4-7ba4aebca472] Pending
helpers_test.go:352: "task-pv-pod-restore" [4081f0b6-60d2-4f7e-94f4-7ba4aebca472] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [4081f0b6-60d2-4f7e-94f4-7ba4aebca472] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003847972s
addons_test.go:614: (dbg) Run:  kubectl --context addons-468341 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-468341 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-468341 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-468341 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (377.524531ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:34:12.108858   13507 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:34:12.109400   13507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:34:12.109440   13507 out.go:374] Setting ErrFile to fd 2...
	I1025 08:34:12.109462   13507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:34:12.109752   13507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:34:12.110117   13507 mustload.go:65] Loading cluster: addons-468341
	I1025 08:34:12.110549   13507 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:34:12.110591   13507 addons.go:606] checking whether the cluster is paused
	I1025 08:34:12.110727   13507 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:34:12.110755   13507 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:34:12.111236   13507 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:34:12.130344   13507 ssh_runner.go:195] Run: systemctl --version
	I1025 08:34:12.130397   13507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:34:12.148369   13507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:34:12.257058   13507 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:34:12.257146   13507 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:34:12.294929   13507 cri.go:89] found id: "5fd1e4aa2eaec9a625e731b2310d7e53df6715992458f7b34f24c564a0371c93"
	I1025 08:34:12.294947   13507 cri.go:89] found id: "5f5e2ff55f9b9d96f5647e7d06d29ba2d01ac53c5f3bebb14c41ac3b15d46dbe"
	I1025 08:34:12.294952   13507 cri.go:89] found id: "fde287f23459127cc0438aff73855d690ce348105c0984d314227ebb893c0f27"
	I1025 08:34:12.294956   13507 cri.go:89] found id: "1c4a84678a48fe97a0d5e107fffed343bbdb5797c60914bfc1787ef22ec8aab6"
	I1025 08:34:12.294959   13507 cri.go:89] found id: "3976f771a2e1f4333179d9b66178f2b8d0e5df890f3e3934fa6b45cecc30471a"
	I1025 08:34:12.294963   13507 cri.go:89] found id: "c675c035a6dbbaf595ca82e7e5494c65d80846fa4403b093199b14027e8a1667"
	I1025 08:34:12.294966   13507 cri.go:89] found id: "f4608f1e203354a663895813be5940b353d5999288ba102363d0fc2ab5bee267"
	I1025 08:34:12.294969   13507 cri.go:89] found id: "2cb17a3d4c7c6272eb0c5b9a7105721c94d5c62089aae769495e41fd49a9130b"
	I1025 08:34:12.294973   13507 cri.go:89] found id: "9dfa9f05089922f4312c61cbba94162087a4b0ccb02343ec9524bd69b39eec86"
	I1025 08:34:12.294980   13507 cri.go:89] found id: "149fe53d8f125d10a2d1e5e32c97a97f55639e392cf1a6745398ed8ac79e7d66"
	I1025 08:34:12.294983   13507 cri.go:89] found id: "c0dd415aff39e64402a3d64867549a7a3b05dfcf210f7e1f94409f91384fa997"
	I1025 08:34:12.294986   13507 cri.go:89] found id: "3f2799b9c1b41eaec314d09a1269f3e858b9982860af8c4168ed54a9280b011d"
	I1025 08:34:12.294989   13507 cri.go:89] found id: "d658e0bd52e0d32446762f4e3466c7f28b2e31c7a876c4caaddd5444b238f373"
	I1025 08:34:12.294992   13507 cri.go:89] found id: "6abccf54d473ff21bf8b7e7a67510c44ab418b13e2ded840ecd714d35bc33050"
	I1025 08:34:12.294995   13507 cri.go:89] found id: "710588f58555a870efb846b35789c67ba50b0c47c5b86a2c12d2ebe8b7d3cf14"
	I1025 08:34:12.295003   13507 cri.go:89] found id: "990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291"
	I1025 08:34:12.295007   13507 cri.go:89] found id: "3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549"
	I1025 08:34:12.295012   13507 cri.go:89] found id: "6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443"
	I1025 08:34:12.295015   13507 cri.go:89] found id: "11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2"
	I1025 08:34:12.295018   13507 cri.go:89] found id: "a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215"
	I1025 08:34:12.295023   13507 cri.go:89] found id: "f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179"
	I1025 08:34:12.295026   13507 cri.go:89] found id: "ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd"
	I1025 08:34:12.295029   13507 cri.go:89] found id: "2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c"
	I1025 08:34:12.295032   13507 cri.go:89] found id: ""
	I1025 08:34:12.295083   13507 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:34:12.318185   13507 out.go:203] 
	W1025 08:34:12.323046   13507 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:34:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:34:12.323068   13507 out.go:285] * 
	W1025 08:34:12.432004   13507 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:34:12.435092   13507 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-468341 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-468341 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (266.951915ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:34:12.492409   13548 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:34:12.492572   13548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:34:12.492584   13548 out.go:374] Setting ErrFile to fd 2...
	I1025 08:34:12.492590   13548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:34:12.492846   13548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:34:12.493131   13548 mustload.go:65] Loading cluster: addons-468341
	I1025 08:34:12.493490   13548 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:34:12.493507   13548 addons.go:606] checking whether the cluster is paused
	I1025 08:34:12.493606   13548 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:34:12.493622   13548 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:34:12.494107   13548 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:34:12.511520   13548 ssh_runner.go:195] Run: systemctl --version
	I1025 08:34:12.511576   13548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:34:12.531791   13548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:34:12.641425   13548 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:34:12.641518   13548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:34:12.677219   13548 cri.go:89] found id: "5fd1e4aa2eaec9a625e731b2310d7e53df6715992458f7b34f24c564a0371c93"
	I1025 08:34:12.677299   13548 cri.go:89] found id: "5f5e2ff55f9b9d96f5647e7d06d29ba2d01ac53c5f3bebb14c41ac3b15d46dbe"
	I1025 08:34:12.677320   13548 cri.go:89] found id: "fde287f23459127cc0438aff73855d690ce348105c0984d314227ebb893c0f27"
	I1025 08:34:12.677344   13548 cri.go:89] found id: "1c4a84678a48fe97a0d5e107fffed343bbdb5797c60914bfc1787ef22ec8aab6"
	I1025 08:34:12.677379   13548 cri.go:89] found id: "3976f771a2e1f4333179d9b66178f2b8d0e5df890f3e3934fa6b45cecc30471a"
	I1025 08:34:12.677406   13548 cri.go:89] found id: "c675c035a6dbbaf595ca82e7e5494c65d80846fa4403b093199b14027e8a1667"
	I1025 08:34:12.677429   13548 cri.go:89] found id: "f4608f1e203354a663895813be5940b353d5999288ba102363d0fc2ab5bee267"
	I1025 08:34:12.677461   13548 cri.go:89] found id: "2cb17a3d4c7c6272eb0c5b9a7105721c94d5c62089aae769495e41fd49a9130b"
	I1025 08:34:12.677480   13548 cri.go:89] found id: "9dfa9f05089922f4312c61cbba94162087a4b0ccb02343ec9524bd69b39eec86"
	I1025 08:34:12.677513   13548 cri.go:89] found id: "149fe53d8f125d10a2d1e5e32c97a97f55639e392cf1a6745398ed8ac79e7d66"
	I1025 08:34:12.677546   13548 cri.go:89] found id: "c0dd415aff39e64402a3d64867549a7a3b05dfcf210f7e1f94409f91384fa997"
	I1025 08:34:12.677565   13548 cri.go:89] found id: "3f2799b9c1b41eaec314d09a1269f3e858b9982860af8c4168ed54a9280b011d"
	I1025 08:34:12.677586   13548 cri.go:89] found id: "d658e0bd52e0d32446762f4e3466c7f28b2e31c7a876c4caaddd5444b238f373"
	I1025 08:34:12.677625   13548 cri.go:89] found id: "6abccf54d473ff21bf8b7e7a67510c44ab418b13e2ded840ecd714d35bc33050"
	I1025 08:34:12.677645   13548 cri.go:89] found id: "710588f58555a870efb846b35789c67ba50b0c47c5b86a2c12d2ebe8b7d3cf14"
	I1025 08:34:12.677669   13548 cri.go:89] found id: "990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291"
	I1025 08:34:12.677712   13548 cri.go:89] found id: "3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549"
	I1025 08:34:12.677733   13548 cri.go:89] found id: "6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443"
	I1025 08:34:12.677753   13548 cri.go:89] found id: "11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2"
	I1025 08:34:12.677775   13548 cri.go:89] found id: "a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215"
	I1025 08:34:12.677816   13548 cri.go:89] found id: "f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179"
	I1025 08:34:12.677836   13548 cri.go:89] found id: "ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd"
	I1025 08:34:12.677866   13548 cri.go:89] found id: "2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c"
	I1025 08:34:12.677883   13548 cri.go:89] found id: ""
	I1025 08:34:12.678009   13548 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:34:12.693215   13548 out.go:203] 
	W1025 08:34:12.695981   13548 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:34:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:34:12.696003   13548 out.go:285] * 
	W1025 08:34:12.699873   13548 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:34:12.702730   13548 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-468341 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (45.55s)
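Every addon enable/disable in this run fails the same way: before touching the addon, minikube probes whether the cluster is paused by running `sudo runc list -f json`, and on this crio node /run/runc does not exist, so the probe itself exits 1 and the command aborts with exit status 11, even though the crictl listing just above it succeeded. Below is a sketch of one possible guard, assuming only the error text captured in this log; it is not minikube's actual code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listPausedRuncIDs probes runc the way this report shows minikube doing,
	// but treats a missing /run/runc state directory as "no paused containers"
	// instead of a fatal error. Hypothetical patch sketch only.
	func listPausedRuncIDs() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// Matches the failure captured above on the crio runtime.
			if strings.Contains(string(out), "open /run/runc: no such file or directory") {
				return nil, nil
			}
			return nil, fmt.Errorf("runc list: %w: %s", err, out)
		}
		// A full implementation would json.Unmarshal(out) and keep entries
		// whose status field is "paused"; elided here.
		return nil, nil
	}

	func main() {
		ids, err := listPausedRuncIDs()
		fmt.Printf("paused: %v, err: %v\n", ids, err)
	}

With a guard like this, the disable path could fall back to the crictl container listing that already succeeded (the "found id:" lines above) instead of failing on runc's missing state.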

x
+
TestAddons/parallel/Headlamp (3.54s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp


=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-468341 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-468341 --alsologtostderr -v=1: exit status 11 (325.458893ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:33:26.348643   11824 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:33:26.348850   11824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:26.348864   11824 out.go:374] Setting ErrFile to fd 2...
	I1025 08:33:26.348869   11824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:26.349159   11824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:33:26.349486   11824 mustload.go:65] Loading cluster: addons-468341
	I1025 08:33:26.349902   11824 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:26.349924   11824 addons.go:606] checking whether the cluster is paused
	I1025 08:33:26.350088   11824 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:26.350107   11824 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:33:26.350577   11824 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:33:26.367088   11824 ssh_runner.go:195] Run: systemctl --version
	I1025 08:33:26.367142   11824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:33:26.393187   11824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:33:26.524379   11824 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:33:26.524491   11824 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:33:26.590329   11824 cri.go:89] found id: "5fd1e4aa2eaec9a625e731b2310d7e53df6715992458f7b34f24c564a0371c93"
	I1025 08:33:26.590348   11824 cri.go:89] found id: "5f5e2ff55f9b9d96f5647e7d06d29ba2d01ac53c5f3bebb14c41ac3b15d46dbe"
	I1025 08:33:26.590352   11824 cri.go:89] found id: "fde287f23459127cc0438aff73855d690ce348105c0984d314227ebb893c0f27"
	I1025 08:33:26.590356   11824 cri.go:89] found id: "1c4a84678a48fe97a0d5e107fffed343bbdb5797c60914bfc1787ef22ec8aab6"
	I1025 08:33:26.590359   11824 cri.go:89] found id: "3976f771a2e1f4333179d9b66178f2b8d0e5df890f3e3934fa6b45cecc30471a"
	I1025 08:33:26.590363   11824 cri.go:89] found id: "c675c035a6dbbaf595ca82e7e5494c65d80846fa4403b093199b14027e8a1667"
	I1025 08:33:26.590367   11824 cri.go:89] found id: "f4608f1e203354a663895813be5940b353d5999288ba102363d0fc2ab5bee267"
	I1025 08:33:26.590370   11824 cri.go:89] found id: "2cb17a3d4c7c6272eb0c5b9a7105721c94d5c62089aae769495e41fd49a9130b"
	I1025 08:33:26.590373   11824 cri.go:89] found id: "9dfa9f05089922f4312c61cbba94162087a4b0ccb02343ec9524bd69b39eec86"
	I1025 08:33:26.590380   11824 cri.go:89] found id: "149fe53d8f125d10a2d1e5e32c97a97f55639e392cf1a6745398ed8ac79e7d66"
	I1025 08:33:26.590384   11824 cri.go:89] found id: "c0dd415aff39e64402a3d64867549a7a3b05dfcf210f7e1f94409f91384fa997"
	I1025 08:33:26.590387   11824 cri.go:89] found id: "3f2799b9c1b41eaec314d09a1269f3e858b9982860af8c4168ed54a9280b011d"
	I1025 08:33:26.590390   11824 cri.go:89] found id: "d658e0bd52e0d32446762f4e3466c7f28b2e31c7a876c4caaddd5444b238f373"
	I1025 08:33:26.590392   11824 cri.go:89] found id: "6abccf54d473ff21bf8b7e7a67510c44ab418b13e2ded840ecd714d35bc33050"
	I1025 08:33:26.590396   11824 cri.go:89] found id: "710588f58555a870efb846b35789c67ba50b0c47c5b86a2c12d2ebe8b7d3cf14"
	I1025 08:33:26.590406   11824 cri.go:89] found id: "990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291"
	I1025 08:33:26.590409   11824 cri.go:89] found id: "3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549"
	I1025 08:33:26.590414   11824 cri.go:89] found id: "6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443"
	I1025 08:33:26.590417   11824 cri.go:89] found id: "11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2"
	I1025 08:33:26.590420   11824 cri.go:89] found id: "a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215"
	I1025 08:33:26.590424   11824 cri.go:89] found id: "f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179"
	I1025 08:33:26.590427   11824 cri.go:89] found id: "ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd"
	I1025 08:33:26.590430   11824 cri.go:89] found id: "2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c"
	I1025 08:33:26.590433   11824 cri.go:89] found id: ""
	I1025 08:33:26.590486   11824 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:33:26.610540   11824 out.go:203] 
	W1025 08:33:26.613404   11824 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:33:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:33:26.613433   11824 out.go:285] * 
	W1025 08:33:26.617181   11824 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:33:26.620048   11824 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-468341 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-468341
helpers_test.go:243: (dbg) docker inspect addons-468341:

-- stdout --
	[
	    {
	        "Id": "921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111",
	        "Created": "2025-10-25T08:30:22.932850145Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5281,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T08:30:23.005048345Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111/hostname",
	        "HostsPath": "/var/lib/docker/containers/921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111/hosts",
	        "LogPath": "/var/lib/docker/containers/921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111/921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111-json.log",
	        "Name": "/addons-468341",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-468341:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-468341",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111",
	                "LowerDir": "/var/lib/docker/overlay2/658dff37d510687d7ea850578e6efc1df446bb050fd0131ea19f38935eea4f9e-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/658dff37d510687d7ea850578e6efc1df446bb050fd0131ea19f38935eea4f9e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/658dff37d510687d7ea850578e6efc1df446bb050fd0131ea19f38935eea4f9e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/658dff37d510687d7ea850578e6efc1df446bb050fd0131ea19f38935eea4f9e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-468341",
	                "Source": "/var/lib/docker/volumes/addons-468341/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-468341",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-468341",
	                "name.minikube.sigs.k8s.io": "addons-468341",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "953f4c687d4c2f5a2e91e34ee118fa4aa98ea2440602b2c4cba8d007779b5b17",
	            "SandboxKey": "/var/run/docker/netns/953f4c687d4c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-468341": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:32:ff:26:0c:5d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "422586f035103bb55aaa9f0e31f2b43fa36e4fbc27c8f4a97382f7de2b9ed97e",
	                    "EndpointID": "cfb1c4d45c6fef1387335910a8ca12188b4265998255f0ff9cd8603646ab1513",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-468341",
	                        "921bcbb16e37"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
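The post-mortem's SSH and status checks depend on the 22/tcp mapping shown in the inspect dump above; the cli_runner lines earlier in this report extract it with the docker template reproduced below. A self-contained sketch of that lookup (container name and expected port taken from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template minikube logs via cli_runner.go in this report.
		const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-468341").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // "32768" in this run
	}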
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-468341 -n addons-468341
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-468341 logs -n 25: (1.688966752s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-598496 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-598496   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-598496                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-598496   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-661693 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-661693   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-661693                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-661693   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-598496                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-598496   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-661693                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-661693   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ start   │ --download-only -p download-docker-812739 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-812739 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ delete  │ -p download-docker-812739                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-812739 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-223268 --alsologtostderr --binary-mirror http://127.0.0.1:41137 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-223268   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-223268                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-223268   │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ addons  │ enable dashboard -p addons-468341                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-468341                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ start   │ -p addons-468341 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:32 UTC │
	│ addons  │ addons-468341 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-468341 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-468341 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-468341 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ ip      │ addons-468341 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │ 25 Oct 25 08:33 UTC │
	│ addons  │ addons-468341 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-468341 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ ssh     │ addons-468341 ssh cat /opt/local-path-provisioner/pvc-e010f192-5941-4327-9df8-ac1fe331714f_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │ 25 Oct 25 08:33 UTC │
	│ addons  │ enable headlamp -p addons-468341 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-468341 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-468341          │ jenkins │ v1.37.0 │ 25 Oct 25 08:33 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:29:55
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:29:55.091280    4872 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:29:55.091482    4872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:55.091493    4872 out.go:374] Setting ErrFile to fd 2...
	I1025 08:29:55.091498    4872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:55.091795    4872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:29:55.092294    4872 out.go:368] Setting JSON to false
	I1025 08:29:55.093073    4872 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":746,"bootTime":1761380249,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 08:29:55.093145    4872 start.go:141] virtualization:  
	I1025 08:29:55.114103    4872 out.go:179] * [addons-468341] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 08:29:55.144478    4872 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 08:29:55.144585    4872 notify.go:220] Checking for updates...
	I1025 08:29:55.203792    4872 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:29:55.219806    4872 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 08:29:55.242998    4872 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 08:29:55.273176    4872 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 08:29:55.296812    4872 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 08:29:55.321978    4872 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:29:55.341948    4872 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 08:29:55.342109    4872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:29:55.408178    4872 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-25 08:29:55.398857938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 08:29:55.408290    4872 docker.go:318] overlay module found
	I1025 08:29:55.430026    4872 out.go:179] * Using the docker driver based on user configuration
	I1025 08:29:55.446624    4872 start.go:305] selected driver: docker
	I1025 08:29:55.446649    4872 start.go:925] validating driver "docker" against <nil>
	I1025 08:29:55.446662    4872 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 08:29:55.447377    4872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:29:55.510286    4872 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-25 08:29:55.499127841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
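
Note: the docker info dump above is minikube's driver health probe, run once during the update check and again while validating the selected driver. A minimal way to reproduce the same probe by hand, using only the stock Docker CLI (the template fields are standard `docker system info` fields):

	# Full JSON dump, exactly as minikube requests it:
	docker system info --format "{{json .}}"
	# Spot-check a few fields the driver validation cares about:
	docker system info --format "NCPU={{.NCPU}} MemTotal={{.MemTotal}} CgroupDriver={{.CgroupDriver}}"
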
	I1025 08:29:55.510449    4872 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:29:55.510696    4872 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 08:29:55.519365    4872 out.go:179] * Using Docker driver with root privileges
	I1025 08:29:55.526147    4872 cni.go:84] Creating CNI manager for ""
	I1025 08:29:55.526221    4872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:29:55.526229    4872 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 08:29:55.526312    4872 start.go:349] cluster config:
	{Name:addons-468341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-468341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:29:55.536374    4872 out.go:179] * Starting "addons-468341" primary control-plane node in "addons-468341" cluster
	I1025 08:29:55.543836    4872 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 08:29:55.549422    4872 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 08:29:55.557089    4872 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:29:55.557147    4872 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 08:29:55.557157    4872 cache.go:58] Caching tarball of preloaded images
	I1025 08:29:55.557219    4872 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 08:29:55.557511    4872 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 08:29:55.557527    4872 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 08:29:55.557868    4872 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/config.json ...
	I1025 08:29:55.557889    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/config.json: {Name:mka19d1f2dad675e22268b31a3d755a4a49d3897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:29:55.574959    4872 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 08:29:55.575093    4872 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 08:29:55.575125    4872 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1025 08:29:55.575135    4872 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1025 08:29:55.575149    4872 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1025 08:29:55.575155    4872 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1025 08:30:14.081320    4872 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1025 08:30:14.081353    4872 cache.go:232] Successfully downloaded all kic artifacts
	I1025 08:30:14.081382    4872 start.go:360] acquireMachinesLock for addons-468341: {Name:mkc686fd048fc7820c5fe7ce0d23697ebcad8b28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 08:30:14.081499    4872 start.go:364] duration metric: took 99.438µs to acquireMachinesLock for "addons-468341"
	I1025 08:30:14.081524    4872 start.go:93] Provisioning new machine with config: &{Name:addons-468341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-468341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 08:30:14.081613    4872 start.go:125] createHost starting for "" (driver="docker")
	I1025 08:30:14.085124    4872 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 08:30:14.085365    4872 start.go:159] libmachine.API.Create for "addons-468341" (driver="docker")
	I1025 08:30:14.085414    4872 client.go:168] LocalClient.Create starting
	I1025 08:30:14.085547    4872 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem
	I1025 08:30:14.584443    4872 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem
	I1025 08:30:15.775099    4872 cli_runner.go:164] Run: docker network inspect addons-468341 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 08:30:15.791782    4872 cli_runner.go:211] docker network inspect addons-468341 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 08:30:15.791882    4872 network_create.go:284] running [docker network inspect addons-468341] to gather additional debugging logs...
	I1025 08:30:15.791905    4872 cli_runner.go:164] Run: docker network inspect addons-468341
	W1025 08:30:15.807823    4872 cli_runner.go:211] docker network inspect addons-468341 returned with exit code 1
	I1025 08:30:15.807855    4872 network_create.go:287] error running [docker network inspect addons-468341]: docker network inspect addons-468341: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-468341 not found
	I1025 08:30:15.807870    4872 network_create.go:289] output of [docker network inspect addons-468341]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-468341 not found
	
	** /stderr **
	I1025 08:30:15.807981    4872 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 08:30:15.825480    4872 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ac3490}
	I1025 08:30:15.825520    4872 network_create.go:124] attempt to create docker network addons-468341 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 08:30:15.825576    4872 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-468341 addons-468341
	I1025 08:30:15.890317    4872 network_create.go:108] docker network addons-468341 192.168.49.0/24 created
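
Note: a quick manual check of the network created above; the Go template indexes the first IPAM config entry (standard Docker CLI, sketch only):

	docker network inspect addons-468341 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.49.0/24 gw 192.168.49.1
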
	I1025 08:30:15.890350    4872 kic.go:121] calculated static IP "192.168.49.2" for the "addons-468341" container
	I1025 08:30:15.890423    4872 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 08:30:15.906654    4872 cli_runner.go:164] Run: docker volume create addons-468341 --label name.minikube.sigs.k8s.io=addons-468341 --label created_by.minikube.sigs.k8s.io=true
	I1025 08:30:15.924069    4872 oci.go:103] Successfully created a docker volume addons-468341
	I1025 08:30:15.924153    4872 cli_runner.go:164] Run: docker run --rm --name addons-468341-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-468341 --entrypoint /usr/bin/test -v addons-468341:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 08:30:18.415356    4872 cli_runner.go:217] Completed: docker run --rm --name addons-468341-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-468341 --entrypoint /usr/bin/test -v addons-468341:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.491163077s)
	I1025 08:30:18.415399    4872 oci.go:107] Successfully prepared a docker volume addons-468341
	I1025 08:30:18.415428    4872 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:30:18.415447    4872 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 08:30:18.415522    4872 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-468341:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 08:30:22.860306    4872 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-468341:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.444745148s)
	I1025 08:30:22.860341    4872 kic.go:203] duration metric: took 4.444890249s to extract preloaded images to volume ...
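
Note: the two `docker run` jobs above first probe the named volume with `/usr/bin/test -d /var/lib`, then untar the preload into it. To peek at what the extraction left in the volume, a sketch (the `alpine` helper image is an assumption, not something this run uses):

	docker run --rm -v addons-468341:/data alpine ls -la /data
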
	W1025 08:30:22.860496    4872 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 08:30:22.860611    4872 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 08:30:22.917078    4872 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-468341 --name addons-468341 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-468341 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-468341 --network addons-468341 --ip 192.168.49.2 --volume addons-468341:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 08:30:23.242169    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Running}}
	I1025 08:30:23.269927    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:23.291382    4872 cli_runner.go:164] Run: docker exec addons-468341 stat /var/lib/dpkg/alternatives/iptables
	I1025 08:30:23.344104    4872 oci.go:144] the created container "addons-468341" has a running status.
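
Note: the container state and the dynamically published SSH port seen a few lines below can be confirmed by hand (standard Docker CLI, sketch only):

	docker container inspect addons-468341 --format 'status={{.State.Status}}'
	# prints the host side of 22/tcp, e.g. 127.0.0.1:32768
	docker port addons-468341 22
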
	I1025 08:30:23.344130    4872 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa...
	I1025 08:30:23.401331    4872 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 08:30:23.421304    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:23.440122    4872 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 08:30:23.440145    4872 kic_runner.go:114] Args: [docker exec --privileged addons-468341 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 08:30:23.493304    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:23.517428    4872 machine.go:93] provisionDockerMachine start ...
	I1025 08:30:23.517537    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:23.542649    4872 main.go:141] libmachine: Using SSH client type: native
	I1025 08:30:23.542991    4872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 08:30:23.543014    4872 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 08:30:23.543781    4872 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 08:30:26.697631    4872 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-468341
	
	I1025 08:30:26.697654    4872 ubuntu.go:182] provisioning hostname "addons-468341"
	I1025 08:30:26.697746    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:26.716111    4872 main.go:141] libmachine: Using SSH client type: native
	I1025 08:30:26.716435    4872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 08:30:26.716452    4872 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-468341 && echo "addons-468341" | sudo tee /etc/hostname
	I1025 08:30:26.875142    4872 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-468341
	
	I1025 08:30:26.875260    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:26.893093    4872 main.go:141] libmachine: Using SSH client type: native
	I1025 08:30:26.893388    4872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 08:30:26.893409    4872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-468341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-468341/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-468341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 08:30:27.040683    4872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 08:30:27.040805    4872 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 08:30:27.040844    4872 ubuntu.go:190] setting up certificates
	I1025 08:30:27.040876    4872 provision.go:84] configureAuth start
	I1025 08:30:27.040951    4872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-468341
	I1025 08:30:27.058326    4872 provision.go:143] copyHostCerts
	I1025 08:30:27.058405    4872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 08:30:27.058652    4872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 08:30:27.058735    4872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 08:30:27.058819    4872 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.addons-468341 san=[127.0.0.1 192.168.49.2 addons-468341 localhost minikube]
	I1025 08:30:27.521500    4872 provision.go:177] copyRemoteCerts
	I1025 08:30:27.521567    4872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 08:30:27.521607    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:27.538674    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:27.641479    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 08:30:27.658278    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 08:30:27.675721    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 08:30:27.692911    4872 provision.go:87] duration metric: took 652.021041ms to configureAuth
	I1025 08:30:27.692937    4872 ubuntu.go:206] setting minikube options for container-runtime
	I1025 08:30:27.693133    4872 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:30:27.693247    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:27.710124    4872 main.go:141] libmachine: Using SSH client type: native
	I1025 08:30:27.710437    4872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1025 08:30:27.710460    4872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 08:30:27.967624    4872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 08:30:27.967697    4872 machine.go:96] duration metric: took 4.45024481s to provisionDockerMachine
	I1025 08:30:27.967721    4872 client.go:171] duration metric: took 13.882296562s to LocalClient.Create
	I1025 08:30:27.967774    4872 start.go:167] duration metric: took 13.882391643s to libmachine.API.Create "addons-468341"
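
Note: the SSH command at 08:30:27 writes the insecure-registry override and restarts cri-o. A sketch for verifying it took effect from the host (the kicbase container runs systemd as PID 1, so systemctl works inside it):

	docker exec addons-468341 cat /etc/sysconfig/crio.minikube
	docker exec addons-468341 systemctl is-active crio
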
	I1025 08:30:27.967800    4872 start.go:293] postStartSetup for "addons-468341" (driver="docker")
	I1025 08:30:27.967827    4872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 08:30:27.967938    4872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 08:30:27.968046    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:27.986079    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:28.098567    4872 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 08:30:28.102127    4872 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 08:30:28.102156    4872 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 08:30:28.102168    4872 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 08:30:28.102238    4872 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 08:30:28.102267    4872 start.go:296] duration metric: took 134.446525ms for postStartSetup
	I1025 08:30:28.102619    4872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-468341
	I1025 08:30:28.119663    4872 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/config.json ...
	I1025 08:30:28.119960    4872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 08:30:28.120009    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:28.139624    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:28.239228    4872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 08:30:28.244379    4872 start.go:128] duration metric: took 14.162752491s to createHost
	I1025 08:30:28.244407    4872 start.go:83] releasing machines lock for "addons-468341", held for 14.162898666s
	I1025 08:30:28.244480    4872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-468341
	I1025 08:30:28.261779    4872 ssh_runner.go:195] Run: cat /version.json
	I1025 08:30:28.261839    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:28.262184    4872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 08:30:28.262262    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:28.286244    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:28.299617    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:28.389715    4872 ssh_runner.go:195] Run: systemctl --version
	I1025 08:30:28.483586    4872 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 08:30:28.517790    4872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 08:30:28.521882    4872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 08:30:28.522020    4872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 08:30:28.551384    4872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 08:30:28.551447    4872 start.go:495] detecting cgroup driver to use...
	I1025 08:30:28.551494    4872 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 08:30:28.551549    4872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 08:30:28.567921    4872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 08:30:28.580432    4872 docker.go:218] disabling cri-docker service (if available) ...
	I1025 08:30:28.580550    4872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 08:30:28.598177    4872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 08:30:28.617273    4872 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 08:30:28.747479    4872 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 08:30:28.887848    4872 docker.go:234] disabling docker service ...
	I1025 08:30:28.887975    4872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 08:30:28.908816    4872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 08:30:28.922073    4872 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 08:30:29.046572    4872 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 08:30:29.172480    4872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 08:30:29.186444    4872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 08:30:29.200002    4872 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 08:30:29.200067    4872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:29.208662    4872 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 08:30:29.208727    4872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:29.217689    4872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:29.226728    4872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:29.235304    4872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 08:30:29.243236    4872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:29.251931    4872 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:30:29.265693    4872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
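
Note: taken together, the sed pipeline from 08:30:29.200 through 08:30:29.265 leaves /etc/crio/crio.conf.d/02-crio.conf with roughly the settings below. This is a reconstruction from the logged commands, not a dump of the file:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
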
	I1025 08:30:29.274388    4872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 08:30:29.281637    4872 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 08:30:29.281744    4872 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 08:30:29.295552    4872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
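
Note: the sysctl failure just above is the expected first attempt: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding. As a standalone sketch:

	sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
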
	I1025 08:30:29.303070    4872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:30:29.426850    4872 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 08:30:29.549011    4872 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 08:30:29.549093    4872 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 08:30:29.552826    4872 start.go:563] Will wait 60s for crictl version
	I1025 08:30:29.552931    4872 ssh_runner.go:195] Run: which crictl
	I1025 08:30:29.556545    4872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 08:30:29.580896    4872 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 08:30:29.581077    4872 ssh_runner.go:195] Run: crio --version
	I1025 08:30:29.610402    4872 ssh_runner.go:195] Run: crio --version
	I1025 08:30:29.640415    4872 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 08:30:29.643274    4872 cli_runner.go:164] Run: docker network inspect addons-468341 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 08:30:29.659548    4872 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 08:30:29.663136    4872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 08:30:29.672910    4872 kubeadm.go:883] updating cluster {Name:addons-468341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-468341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 08:30:29.673029    4872 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:30:29.673094    4872 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:30:29.711357    4872 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 08:30:29.711380    4872 crio.go:433] Images already preloaded, skipping extraction
	I1025 08:30:29.711437    4872 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:30:29.740644    4872 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 08:30:29.740667    4872 cache_images.go:85] Images are preloaded, skipping loading
	I1025 08:30:29.740675    4872 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1025 08:30:29.740761    4872 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-468341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-468341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
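
Note: in the kubelet unit above, the bare `ExecStart=` line is deliberate systemd idiom: it clears the ExecStart inherited from the base unit so the next line fully replaces it. The drop-in is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below, after which applying it is just what the log does at 08:30:29.865:

	sudo systemctl daemon-reload
	sudo systemctl start kubelet
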
	I1025 08:30:29.740846    4872 ssh_runner.go:195] Run: crio config
	I1025 08:30:29.798927    4872 cni.go:84] Creating CNI manager for ""
	I1025 08:30:29.798950    4872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:30:29.798971    4872 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 08:30:29.799018    4872 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-468341 NodeName:addons-468341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 08:30:29.799182    4872 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-468341"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 08:30:29.799254    4872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 08:30:29.806681    4872 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 08:30:29.806751    4872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 08:30:29.814141    4872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 08:30:29.826729    4872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 08:30:29.839252    4872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
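
Note: once the rendered config is moved into place as /var/tmp/minikube/kubeadm.yaml, it can be sanity-checked offline; whether the subcommand is available depends on the bundled kubeadm version, so this is a sketch rather than part of the run:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
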
	I1025 08:30:29.851772    4872 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 08:30:29.855447    4872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 08:30:29.865175    4872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:30:29.989812    4872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 08:30:30.039910    4872 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341 for IP: 192.168.49.2
	I1025 08:30:30.039989    4872 certs.go:195] generating shared ca certs ...
	I1025 08:30:30.040022    4872 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:30.040216    4872 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 08:30:30.411157    4872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt ...
	I1025 08:30:30.411205    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt: {Name:mk52523ff552b275190ee126a048106c7e302f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:30.411443    4872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key ...
	I1025 08:30:30.411460    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key: {Name:mkb1a534575f9d829c260998bd8a08f47ad14582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:30.411558    4872 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 08:30:30.632380    4872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt ...
	I1025 08:30:30.632411    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt: {Name:mk73df502898df6e5dc6aa607bc2f5fd24d2e8be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:30.632585    4872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key ...
	I1025 08:30:30.632598    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key: {Name:mk3e093eefd488f53bba9abe0f102f8a60ee7e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:30.632676    4872 certs.go:257] generating profile certs ...
	I1025 08:30:30.632735    4872 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.key
	I1025 08:30:30.632753    4872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt with IP's: []
	I1025 08:30:31.072729    4872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt ...
	I1025 08:30:31.072761    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: {Name:mked676dc37c5446e46be4bc45a0b4fcac476eaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:31.072954    4872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.key ...
	I1025 08:30:31.072969    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.key: {Name:mkaed03af2cb2e73eb4ef8d47a01b9f81104a746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:31.073053    4872 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.key.ea2331ce
	I1025 08:30:31.073076    4872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.crt.ea2331ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1025 08:30:31.155314    4872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.crt.ea2331ce ...
	I1025 08:30:31.155344    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.crt.ea2331ce: {Name:mka305d293c5f6355df49608d150e4ab12440176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:31.155515    4872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.key.ea2331ce ...
	I1025 08:30:31.155530    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.key.ea2331ce: {Name:mk94a466220e59b4d770674af0f2ae191f9db611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:31.155614    4872 certs.go:382] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.crt.ea2331ce -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.crt
	I1025 08:30:31.155698    4872 certs.go:386] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.key.ea2331ce -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.key
	I1025 08:30:31.155755    4872 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/proxy-client.key
	I1025 08:30:31.155778    4872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/proxy-client.crt with IP's: []
	I1025 08:30:32.017070    4872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/proxy-client.crt ...
	I1025 08:30:32.017102    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/proxy-client.crt: {Name:mk80d83f4c97b8faf7d18eb92e95aa4f3b4e33e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:32.017320    4872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/proxy-client.key ...
	I1025 08:30:32.017340    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/proxy-client.key: {Name:mk2b8bed11fe3b414721931dd6020503b901cc9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:32.017532    4872 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 08:30:32.017577    4872 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 08:30:32.017608    4872 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 08:30:32.017639    4872 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 08:30:32.018244    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 08:30:32.039539    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 08:30:32.058746    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 08:30:32.077322    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 08:30:32.095859    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 08:30:32.113758    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 08:30:32.132034    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 08:30:32.149544    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 08:30:32.166911    4872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 08:30:32.185409    4872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 08:30:32.198610    4872 ssh_runner.go:195] Run: openssl version
	I1025 08:30:32.204669    4872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 08:30:32.213186    4872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:30:32.216819    4872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:30:32.216882    4872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:30:32.257612    4872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
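
Note: the b5213941.0 link name is OpenSSL's subject-hash lookup form; the hash command and the symlink above amount to the following sketch, where the hash output names the link:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
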
	I1025 08:30:32.265715    4872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 08:30:32.269012    4872 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 08:30:32.269066    4872 kubeadm.go:400] StartCluster: {Name:addons-468341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-468341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:30:32.269141    4872 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:30:32.269211    4872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:30:32.295187    4872 cri.go:89] found id: ""
	I1025 08:30:32.295267    4872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 08:30:32.302833    4872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 08:30:32.310377    4872 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 08:30:32.310452    4872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 08:30:32.318252    4872 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 08:30:32.318282    4872 kubeadm.go:157] found existing configuration files:
	
	I1025 08:30:32.318368    4872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 08:30:32.325958    4872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 08:30:32.326047    4872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 08:30:32.333657    4872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 08:30:32.342062    4872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 08:30:32.342131    4872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 08:30:32.350047    4872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 08:30:32.358771    4872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 08:30:32.358875    4872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 08:30:32.366193    4872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 08:30:32.373798    4872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 08:30:32.373865    4872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
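(The four grep/rm pairs above are one pattern: each /etc/kubernetes/*.conf is kept only if it already references the expected control-plane endpoint; otherwise it is removed so kubeadm regenerates it. A sketch of that check-and-remove loop in Go — the endpoint string and file list come straight from the log; everything else is illustrative, not kubeadm.go:163's actual code.)

    // Sketch: drop stale kubeconfigs that do not point at the expected
    // control-plane endpoint, as the grep + rm -f pairs above do.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
    			// Equivalent of: sudo rm -f <f> (a missing file is fine).
    			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
    				fmt.Fprintln(os.Stderr, rmErr)
    			}
    		}
    	}
    }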
	I1025 08:30:32.381452    4872 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 08:30:32.429110    4872 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 08:30:32.429173    4872 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 08:30:32.461968    4872 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 08:30:32.462113    4872 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 08:30:32.462157    4872 kubeadm.go:318] OS: Linux
	I1025 08:30:32.462208    4872 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 08:30:32.462267    4872 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 08:30:32.462319    4872 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 08:30:32.462373    4872 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 08:30:32.462450    4872 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 08:30:32.462503    4872 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 08:30:32.462555    4872 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 08:30:32.462610    4872 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 08:30:32.462665    4872 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 08:30:32.535970    4872 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 08:30:32.536090    4872 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 08:30:32.536190    4872 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 08:30:32.544683    4872 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 08:30:32.549204    4872 out.go:252]   - Generating certificates and keys ...
	I1025 08:30:32.549379    4872 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 08:30:32.549504    4872 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 08:30:33.773840    4872 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 08:30:34.489262    4872 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 08:30:34.872804    4872 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 08:30:35.270246    4872 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 08:30:35.502390    4872 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 08:30:35.502817    4872 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-468341 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 08:30:35.782346    4872 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 08:30:35.782647    4872 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-468341 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 08:30:36.107461    4872 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 08:30:36.656361    4872 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 08:30:37.052900    4872 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 08:30:37.053200    4872 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 08:30:37.667727    4872 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 08:30:39.071500    4872 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 08:30:40.040619    4872 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 08:30:41.358852    4872 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 08:30:41.548077    4872 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 08:30:41.548675    4872 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 08:30:41.551370    4872 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 08:30:41.554776    4872 out.go:252]   - Booting up control plane ...
	I1025 08:30:41.554889    4872 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 08:30:41.554971    4872 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 08:30:41.555041    4872 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 08:30:41.570563    4872 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 08:30:41.570932    4872 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 08:30:41.578075    4872 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 08:30:41.578425    4872 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 08:30:41.578474    4872 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 08:30:41.709640    4872 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 08:30:41.709758    4872 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 08:30:42.719465    4872 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00996888s
	I1025 08:30:42.723089    4872 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 08:30:42.723184    4872 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1025 08:30:42.723607    4872 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 08:30:42.723697    4872 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 08:30:45.364017    4872 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.640432899s
	I1025 08:30:46.830979    4872 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.107832901s
	I1025 08:30:48.726070    4872 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002866075s
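(The [control-plane-check] lines above poll three well-known local endpoints: the apiserver's /livez on the advertise address, the controller-manager's /healthz on 127.0.0.1:10257, and the scheduler's /livez on 127.0.0.1:10259. Below is a hedged Go sketch of such a poll; the URLs are from the log, but the interval and timeout are illustrative, TLS verification is skipped for simplicity because these components serve self-signed certificates, and the endpoints are checked sequentially here whereas kubeadm checks them concurrently.)

    // Sketch: poll control-plane health endpoints until each returns 200,
    // approximating the [control-plane-check] phase logged above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	for _, u := range []string{
    		"https://192.168.49.2:8443/livez",
    		"https://127.0.0.1:10257/healthz",
    		"https://127.0.0.1:10259/livez",
    	} {
    		if err := waitHealthy(u, 4*time.Minute); err != nil {
    			fmt.Println(err)
    		}
    	}
    }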
	I1025 08:30:48.749935    4872 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 08:30:48.774508    4872 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 08:30:48.789745    4872 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 08:30:48.790279    4872 kubeadm.go:318] [mark-control-plane] Marking the node addons-468341 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 08:30:48.804766    4872 kubeadm.go:318] [bootstrap-token] Using token: dj0hgz.vala652dlb0y3ydo
	I1025 08:30:48.807784    4872 out.go:252]   - Configuring RBAC rules ...
	I1025 08:30:48.807936    4872 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 08:30:48.814496    4872 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 08:30:48.826919    4872 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 08:30:48.836286    4872 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 08:30:48.840632    4872 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 08:30:48.844849    4872 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 08:30:49.137123    4872 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 08:30:49.621741    4872 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 08:30:50.132738    4872 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 08:30:50.134435    4872 kubeadm.go:318] 
	I1025 08:30:50.134515    4872 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 08:30:50.134521    4872 kubeadm.go:318] 
	I1025 08:30:50.134597    4872 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 08:30:50.134602    4872 kubeadm.go:318] 
	I1025 08:30:50.134627    4872 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 08:30:50.134685    4872 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 08:30:50.134736    4872 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 08:30:50.134740    4872 kubeadm.go:318] 
	I1025 08:30:50.134810    4872 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 08:30:50.134815    4872 kubeadm.go:318] 
	I1025 08:30:50.134861    4872 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 08:30:50.134866    4872 kubeadm.go:318] 
	I1025 08:30:50.134917    4872 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 08:30:50.134990    4872 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 08:30:50.135057    4872 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 08:30:50.135062    4872 kubeadm.go:318] 
	I1025 08:30:50.135145    4872 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 08:30:50.135220    4872 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 08:30:50.135225    4872 kubeadm.go:318] 
	I1025 08:30:50.135308    4872 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token dj0hgz.vala652dlb0y3ydo \
	I1025 08:30:50.135409    4872 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b \
	I1025 08:30:50.135430    4872 kubeadm.go:318] 	--control-plane 
	I1025 08:30:50.135434    4872 kubeadm.go:318] 
	I1025 08:30:50.135517    4872 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 08:30:50.135522    4872 kubeadm.go:318] 
	I1025 08:30:50.135603    4872 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token dj0hgz.vala652dlb0y3ydo \
	I1025 08:30:50.135704    4872 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b 
	I1025 08:30:50.138930    4872 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 08:30:50.139166    4872 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 08:30:50.139275    4872 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
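(The join commands above pin the cluster CA with --discovery-token-ca-cert-hash. That pin is not a hash of the ca.crt file itself: kubeadm hashes the CA certificate's DER-encoded SubjectPublicKeyInfo, which lets a joining node verify the CA it receives over the untrusted bootstrap channel. A minimal Go sketch that recomputes the pin from the on-disk CA — the cert path is taken from the log earlier in this run.)

    // Sketch: recompute the --discovery-token-ca-cert-hash pin.
    // kubeadm pins sha256 over the DER-encoded SubjectPublicKeyInfo.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum) // should match the hash in the join command
    }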
	I1025 08:30:50.139292    4872 cni.go:84] Creating CNI manager for ""
	I1025 08:30:50.139299    4872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:30:50.142550    4872 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 08:30:50.145511    4872 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 08:30:50.149264    4872 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 08:30:50.149285    4872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 08:30:50.166111    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 08:30:50.445957    4872 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 08:30:50.446084    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:50.446118    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-468341 minikube.k8s.io/updated_at=2025_10_25T08_30_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=addons-468341 minikube.k8s.io/primary=true
	I1025 08:30:50.639309    4872 ops.go:34] apiserver oom_adj: -16
	I1025 08:30:50.639413    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:51.139905    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:51.639554    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:52.140047    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:52.640091    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:53.139854    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:53.640464    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:54.140467    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:54.640436    4872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 08:30:54.744684    4872 kubeadm.go:1113] duration metric: took 4.298659374s to wait for elevateKubeSystemPrivileges
	I1025 08:30:54.744711    4872 kubeadm.go:402] duration metric: took 22.475648638s to StartCluster
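(The repeated `kubectl get sa default` runs above, spaced roughly 500ms apart, are elevateKubeSystemPrivileges waiting for the "default" ServiceAccount to exist before the cluster-admin binding created earlier can take effect. A sketch of that wait, shelling out the same way the log does; the kubectl path and kubeconfig are from the log, while the timeout is an illustrative assumption.)

    // Sketch: poll `kubectl get sa default` until it succeeds, as the
    // repeated ssh_runner lines above do.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
    	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
    	for time.Now().Before(deadline) {
    		cmd := exec.Command(kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for default service account")
    }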
	I1025 08:30:54.744728    4872 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:54.744840    4872 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 08:30:54.745231    4872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:30:54.745422    4872 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 08:30:54.745561    4872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 08:30:54.745802    4872 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:30:54.745832    4872 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1025 08:30:54.745918    4872 addons.go:69] Setting yakd=true in profile "addons-468341"
	I1025 08:30:54.745938    4872 addons.go:238] Setting addon yakd=true in "addons-468341"
	I1025 08:30:54.745959    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.746513    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.746814    4872 addons.go:69] Setting inspektor-gadget=true in profile "addons-468341"
	I1025 08:30:54.746839    4872 addons.go:238] Setting addon inspektor-gadget=true in "addons-468341"
	I1025 08:30:54.746869    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.747271    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.747521    4872 addons.go:69] Setting metrics-server=true in profile "addons-468341"
	I1025 08:30:54.747545    4872 addons.go:238] Setting addon metrics-server=true in "addons-468341"
	I1025 08:30:54.747569    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.748033    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.751349    4872 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-468341"
	I1025 08:30:54.751667    4872 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-468341"
	I1025 08:30:54.751701    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.752141    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.751512    4872 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-468341"
	I1025 08:30:54.755922    4872 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-468341"
	I1025 08:30:54.755997    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.751518    4872 addons.go:69] Setting cloud-spanner=true in profile "addons-468341"
	I1025 08:30:54.751524    4872 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-468341"
	I1025 08:30:54.751528    4872 addons.go:69] Setting default-storageclass=true in profile "addons-468341"
	I1025 08:30:54.751531    4872 addons.go:69] Setting gcp-auth=true in profile "addons-468341"
	I1025 08:30:54.751534    4872 addons.go:69] Setting ingress=true in profile "addons-468341"
	I1025 08:30:54.751537    4872 addons.go:69] Setting ingress-dns=true in profile "addons-468341"
	I1025 08:30:54.751590    4872 addons.go:69] Setting registry=true in profile "addons-468341"
	I1025 08:30:54.751595    4872 addons.go:69] Setting registry-creds=true in profile "addons-468341"
	I1025 08:30:54.751599    4872 addons.go:69] Setting storage-provisioner=true in profile "addons-468341"
	I1025 08:30:54.751602    4872 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-468341"
	I1025 08:30:54.751605    4872 addons.go:69] Setting volcano=true in profile "addons-468341"
	I1025 08:30:54.751608    4872 addons.go:69] Setting volumesnapshots=true in profile "addons-468341"
	I1025 08:30:54.751641    4872 out.go:179] * Verifying Kubernetes components...
	I1025 08:30:54.759989    4872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:30:54.763662    4872 addons.go:238] Setting addon ingress-dns=true in "addons-468341"
	I1025 08:30:54.763782    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.764337    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.764702    4872 addons.go:238] Setting addon registry=true in "addons-468341"
	I1025 08:30:54.764770    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.774544    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.764919    4872 addons.go:238] Setting addon registry-creds=true in "addons-468341"
	I1025 08:30:54.784826    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.764930    4872 addons.go:238] Setting addon storage-provisioner=true in "addons-468341"
	I1025 08:30:54.787270    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.787853    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.764939    4872 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-468341"
	I1025 08:30:54.796460    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.796862    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.764953    4872 addons.go:238] Setting addon volcano=true in "addons-468341"
	I1025 08:30:54.803935    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.804533    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.764959    4872 addons.go:238] Setting addon volumesnapshots=true in "addons-468341"
	I1025 08:30:54.819884    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.820408    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.765004    4872 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-468341"
	I1025 08:30:54.849174    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.849641    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.765417    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.875695    4872 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1025 08:30:54.765426    4872 addons.go:238] Setting addon cloud-spanner=true in "addons-468341"
	I1025 08:30:54.765439    4872 mustload.go:65] Loading cluster: addons-468341
	I1025 08:30:54.765450    4872 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-468341"
	I1025 08:30:54.765459    4872 addons.go:238] Setting addon ingress=true in "addons-468341"
	I1025 08:30:54.884387    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.885033    4872 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1025 08:30:54.897134    4872 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 08:30:54.897219    4872 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 08:30:54.897315    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:54.902301    4872 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-468341"
	I1025 08:30:54.922603    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.923154    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.936393    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:54.937024    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.943717    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.948118    4872 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 08:30:54.970133    4872 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 08:30:54.970158    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 08:30:54.970248    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:54.948326    4872 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 08:30:54.995322    4872 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1025 08:30:54.995392    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:54.948362    4872 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 08:30:55.033447    4872 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 08:30:55.033477    4872 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 08:30:55.033557    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.051166    4872 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1025 08:30:54.966530    4872 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:30:55.061907    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:54.966813    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:55.061167    4872 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1025 08:30:55.061221    4872 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 08:30:55.083831    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1025 08:30:55.083902    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	W1025 08:30:55.061439    4872 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1025 08:30:55.101653    4872 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 08:30:55.101678    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 08:30:55.101758    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.132368    4872 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1025 08:30:55.136201    4872 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1025 08:30:55.137849    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 08:30:55.138217    4872 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 08:30:55.140282    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1025 08:30:55.140370    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.163271    4872 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 08:30:55.163299    4872 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 08:30:55.163367    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.189852    4872 out.go:179]   - Using image docker.io/registry:3.0.0
	I1025 08:30:55.192228    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.198183    4872 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 08:30:55.198208    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 08:30:55.198291    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.214177    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 08:30:55.214426    4872 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 08:30:55.218228    4872 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 08:30:55.222124    4872 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 08:30:55.222150    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 08:30:55.222227    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.240420    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 08:30:55.243715    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 08:30:55.246827    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 08:30:55.250657    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.258060    4872 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:30:55.258139    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 08:30:55.258155    4872 out.go:179]   - Using image docker.io/busybox:stable
	I1025 08:30:55.258229    4872 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1025 08:30:55.264356    4872 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 08:30:55.264378    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 08:30:55.264522    4872 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:30:55.264662    4872 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1025 08:30:55.264672    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 08:30:55.264735    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.264925    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 08:30:55.265084    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.295976    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 08:30:55.297416    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:55.309549    4872 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1025 08:30:55.316270    4872 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 08:30:55.316292    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 08:30:55.316373    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.352953    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.358957    4872 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 08:30:55.363040    4872 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 08:30:55.363067    4872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 08:30:55.363147    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.381442    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.383247    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.384139    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.384643    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.415905    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.438605    4872 addons.go:238] Setting addon default-storageclass=true in "addons-468341"
	I1025 08:30:55.438647    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:30:55.439071    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:30:55.457815    4872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 08:30:55.458103    4872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 08:30:55.497621    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.498722    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.499794    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.505813    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.508387    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.522112    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.533695    4872 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 08:30:55.533715    4872 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 08:30:55.533777    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:30:55.574686    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:30:55.937304    4872 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 08:30:55.937330    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 08:30:55.998374    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 08:30:56.047206    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 08:30:56.063495    4872 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 08:30:56.063526    4872 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 08:30:56.104433    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 08:30:56.111561    4872 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:56.111585    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1025 08:30:56.121701    4872 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 08:30:56.121728    4872 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 08:30:56.189433    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 08:30:56.203211    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 08:30:56.213914    4872 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 08:30:56.213938    4872 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 08:30:56.218649    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 08:30:56.246517    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 08:30:56.273796    4872 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 08:30:56.273821    4872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 08:30:56.305896    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 08:30:56.352669    4872 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 08:30:56.352774    4872 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 08:30:56.412107    4872 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 08:30:56.412196    4872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 08:30:56.415273    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:30:56.431546    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 08:30:56.499455    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 08:30:56.529552    4872 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 08:30:56.529629    4872 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 08:30:56.549398    4872 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 08:30:56.549502    4872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 08:30:56.552126    4872 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 08:30:56.552197    4872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 08:30:56.654178    4872 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 08:30:56.654277    4872 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 08:30:56.735595    4872 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 08:30:56.735679    4872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 08:30:56.753089    4872 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 08:30:56.753174    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 08:30:56.789295    4872 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 08:30:56.789374    4872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 08:30:56.888624    4872 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 08:30:56.888712    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 08:30:56.912954    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 08:30:57.008826    4872 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 08:30:57.008908    4872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 08:30:57.102104    4872 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 08:30:57.102207    4872 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 08:30:57.139818    4872 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 08:30:57.139846    4872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 08:30:57.147506    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 08:30:57.244513    4872 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:30:57.244599    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 08:30:57.259419    4872 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 08:30:57.259443    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 08:30:57.402869    4872 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.944712903s)
	I1025 08:30:57.402961    4872 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.945072461s)
	I1025 08:30:57.402983    4872 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
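(The long sed pipeline that just completed rewrites the Corefile inside the coredns ConfigMap: it inserts a hosts{} block mapping host.minikube.internal to the gateway IP immediately before the forward plugin, and adds the log plugin before errors. The Go sketch below performs the same hosts-block insertion as pure string surgery; fetching and replacing the ConfigMap via the API is omitted, and the helper name is illustrative.)

    // Sketch: the Corefile edit behind "host record injected" above —
    // insert a hosts{} block before "forward . /etc/resolv.conf".
    package main

    import (
    	"fmt"
    	"strings"
    )

    func injectHostRecord(corefile, gatewayIP string) string {
    	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", gatewayIP)
    	var b strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		// Place the hosts block just above the forward plugin so it
    		// answers for host.minikube.internal and falls through otherwise.
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			b.WriteString(hosts)
    		}
    		b.WriteString(line)
    	}
    	return b.String()
    }

    func main() {
    	fmt.Print(injectHostRecord("        forward . /etc/resolv.conf\n", "192.168.49.1"))
    }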
	I1025 08:30:57.404548    4872 node_ready.go:35] waiting up to 6m0s for node "addons-468341" to be "Ready" ...
	I1025 08:30:57.550738    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:30:57.556191    4872 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 08:30:57.556216    4872 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 08:30:57.622560    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.624149965s)
	I1025 08:30:57.622630    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.57540325s)
	I1025 08:30:57.816126    4872 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 08:30:57.816149    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 08:30:57.906960    4872 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-468341" context rescaled to 1 replicas
	I1025 08:30:58.005904    4872 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 08:30:58.005928    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 08:30:58.131476    4872 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 08:30:58.131506    4872 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 08:30:58.244951    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1025 08:30:59.473610    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:30:59.800859    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.69639027s)
	I1025 08:30:59.800928    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.611470994s)
	I1025 08:31:00.475994    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.272750943s)
	I1025 08:31:01.324086    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.105404176s)
	I1025 08:31:01.324615    4872 addons.go:479] Verifying addon ingress=true in "addons-468341"
	I1025 08:31:01.324212    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.07767063s)
	I1025 08:31:01.324239    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.018263649s)
	I1025 08:31:01.324298    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.909004284s)
	W1025 08:31:01.324864    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:01.324896    4872 retry.go:31] will retry after 323.692618ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
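	[Note] The retry.go:31 lines record minikube's apply loop backing off between attempts; across this section the delays grow roughly geometrically with jitter (323ms, 371ms, 827ms, 1.16s, 2.03s, 2.23s, 2.57s, 8.4s). A minimal sketch of that pattern, not minikube's actual retry package; applyAddon stands in for the kubectl apply call:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// Retry a failing operation with jittered, roughly doubling delays,
	// matching the growing "will retry after ..." intervals in the log.
	func retryApply(applyAddon func() error, attempts int, base time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = applyAddon(); err == nil {
				return nil
			}
			d := base << uint(i)                          // exponential growth
			d += time.Duration(rand.Int63n(int64(d) / 2)) // plus jitter
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}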
	I1025 08:31:01.324312    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.892747197s)
	I1025 08:31:01.324360    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.824873829s)
	I1025 08:31:01.324977    4872 addons.go:479] Verifying addon metrics-server=true in "addons-468341"
	I1025 08:31:01.324380    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.411332362s)
	I1025 08:31:01.324988    4872 addons.go:479] Verifying addon registry=true in "addons-468341"
	I1025 08:31:01.324409    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.176884827s)
	I1025 08:31:01.324479    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.773715492s)
	W1025 08:31:01.325810    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 08:31:01.325827    4872 retry.go:31] will retry after 181.32551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
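	[Note] The stderr above is a CRD-ordering race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that registers the snapshot.storage.k8s.io CRDs, so the custom resource can reach the API server before its kind exists ("ensure CRDs are installed first"). The conventional fix is to apply the CRDs, wait for them to report Established, and only then apply resources that use them. A sketch shelling out to kubectl with the file paths from the log; minikube itself just retries, as the following lines show:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		// 1. Register the snapshot CRDs.
		for _, f := range []string{
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
		} {
			if err := run("apply", "-f", f); err != nil {
				panic(err)
			}
		}
		// 2. Block until the API server can serve the new kind.
		if err := run("wait", "--for=condition=established", "--timeout=60s",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
			panic(err)
		}
		// 3. Only now create instances of that kind.
		if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
			panic(err)
		}
	}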
	I1025 08:31:01.329079    4872 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-468341 service yakd-dashboard -n yakd-dashboard
	
	I1025 08:31:01.329229    4872 out.go:179] * Verifying registry addon...
	I1025 08:31:01.329273    4872 out.go:179] * Verifying ingress addon...
	I1025 08:31:01.333747    4872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 08:31:01.335338    4872 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 08:31:01.342365    4872 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 08:31:01.342384    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1025 08:31:01.348735    4872 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1025 08:31:01.349094    4872 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 08:31:01.349132    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
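	[Note] The 'default-storageclass' warning above is the API server's optimistic-concurrency check: between minikube's read and write of the local-path StorageClass, something else updated the object, so the stale resourceVersion was rejected. The standard client-go remedy is retry.RetryOnConflict, which re-reads and re-applies the mutation on every attempt; a sketch, assuming a working clientset (markNonDefault is hypothetical):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// Re-read the StorageClass on every attempt so the update always
	// carries a fresh resourceVersion instead of failing on conflict.
	func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}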
	I1025 08:31:01.507862    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 08:31:01.605791    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.360760737s)
	I1025 08:31:01.605867    4872 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-468341"
	I1025 08:31:01.609218    4872 out.go:179] * Verifying csi-hostpath-driver addon...
	I1025 08:31:01.613541    4872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 08:31:01.623840    4872 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 08:31:01.623865    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
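	[Note] Each kapi.go:96 line below is one tick of a poll loop: list pods matching a label selector and keep waiting while any match is still Pending. A sketch of the same loop with client-go; waitForPods, the 500ms interval, and the 6-minute timeout are illustrative choices, not minikube's exact code:

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// Poll until at least one pod matches the selector and all matches are Running.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists: keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
	}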
	I1025 08:31:01.649141    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:01.839462    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:01.839689    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 08:31:01.912925    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:02.117756    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:02.339673    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:02.339919    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:02.617111    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:02.838493    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:02.838653    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:02.908100    4872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 08:31:02.908192    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:31:02.927519    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
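	[Note] With the docker driver the guest's sshd sits behind a published container port, so the cli_runner line above extracts the host port mapped to 22/tcp with a Go template and sshutil dials 127.0.0.1 on it with the machine's private key. A self-contained sketch of that handshake; the container name and key path are taken from the log, and host-key checking is skipped as it would be for a throwaway test node:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Ask Docker which host port is published for the guest's 22/tcp.
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"addons-468341").Output()
		if err != nil {
			panic(err)
		}
		port := strings.TrimSpace(string(out))

		key, err := os.ReadFile("/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:"+port, &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM
		})
		if err != nil {
			panic(err)
		}
		defer client.Close()
		fmt.Println("connected to", client.RemoteAddr())
	}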
	I1025 08:31:03.042942    4872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 08:31:03.055827    4872 addons.go:238] Setting addon gcp-auth=true in "addons-468341"
	I1025 08:31:03.055871    4872 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:31:03.056337    4872 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:31:03.074101    4872 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 08:31:03.074155    4872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:31:03.094787    4872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:31:03.118667    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:03.338052    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:03.338482    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:03.617346    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:03.837724    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:03.839386    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:04.117864    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:04.280816    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.772906338s)
	I1025 08:31:04.280943    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.631776634s)
	W1025 08:31:04.280966    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:04.280982    4872 retry.go:31] will retry after 371.165011ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:04.281016    4872 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.206893205s)
	I1025 08:31:04.284217    4872 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 08:31:04.287138    4872 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 08:31:04.289973    4872 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 08:31:04.290010    4872 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 08:31:04.302876    4872 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 08:31:04.302897    4872 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 08:31:04.315804    4872 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 08:31:04.315827    4872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 08:31:04.329120    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 08:31:04.338434    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:04.338845    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1025 08:31:04.410366    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:04.617429    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:04.652788    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:04.845518    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:04.851962    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:04.878589    4872 addons.go:479] Verifying addon gcp-auth=true in "addons-468341"
	I1025 08:31:04.881611    4872 out.go:179] * Verifying gcp-auth addon...
	I1025 08:31:04.885227    4872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 08:31:04.946450    4872 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 08:31:04.946489    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:05.117321    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:05.340342    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:05.341320    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:05.388795    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:05.607419    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:05.607451    4872 retry.go:31] will retry after 827.139043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:05.617394    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:05.837742    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:05.838553    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:05.888468    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:06.117113    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:06.337262    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:06.337913    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:06.388837    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:06.435360    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:06.617298    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:06.837670    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:06.839192    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:06.888497    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:06.907915    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:07.116798    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:31:07.242309    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:07.242337    4872 retry.go:31] will retry after 1.164224313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:07.338297    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:07.338580    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:07.388601    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:07.617299    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:07.837442    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:07.838555    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:07.888764    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:08.116552    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:08.337296    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:08.338228    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:08.388624    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:08.407161    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:08.630411    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:08.837609    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:08.840400    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:08.888658    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:09.116894    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:31:09.209359    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:09.209435    4872 retry.go:31] will retry after 1.876878779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:09.337470    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:09.338304    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:09.388919    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:09.407578    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:09.616735    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:09.837613    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:09.838724    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:09.888744    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:10.117323    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:10.338521    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:10.339040    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:10.388981    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:10.616726    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:10.837021    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:10.839069    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:10.888804    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:11.086827    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:11.117288    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:11.337598    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:11.340136    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:11.389163    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:11.408543    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:11.616736    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:11.837577    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:11.838148    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:11.888135    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:11.897885    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:11.897916    4872 retry.go:31] will retry after 2.028497252s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:12.116658    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:12.336509    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:12.338824    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:12.388844    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:12.617272    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:12.837272    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:12.838491    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:12.888384    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:13.117282    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:13.337290    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:13.338175    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:13.388817    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:13.616426    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:13.837727    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:13.839301    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:13.888203    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:13.907980    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:13.927276    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:14.116217    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:14.338967    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:14.340070    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:14.389452    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:14.617676    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:31:14.739971    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:14.740012    4872 retry.go:31] will retry after 2.23204681s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:14.837018    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:14.839192    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:14.887849    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:15.117571    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:15.337241    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:15.339103    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:15.388596    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:15.617160    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:15.837222    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:15.838570    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:15.888445    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:15.908329    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:16.117322    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:16.337889    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:16.338856    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:16.388780    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:16.617326    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:16.837199    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:16.838395    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:16.888094    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:16.972776    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:17.117231    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:17.338883    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:17.339856    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:17.388979    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:17.618768    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:31:17.807683    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:17.807776    4872 retry.go:31] will retry after 2.568380534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:17.836622    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:17.839103    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:17.888248    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:18.117303    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:18.337076    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:18.338485    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:18.388713    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:18.407231    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:18.617373    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:18.838910    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:18.839324    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:18.887938    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:19.116905    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:19.336878    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:19.339110    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:19.389019    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:19.617378    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:19.837631    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:19.838827    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:19.888545    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:20.117670    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:20.338754    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:20.339087    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:20.377155    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:20.388311    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:20.408220    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:20.618358    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:20.840319    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:20.841410    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:20.888523    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:21.117222    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:31:21.202864    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:21.202902    4872 retry.go:31] will retry after 8.402756938s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:21.336988    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:21.339196    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:21.388201    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:21.617220    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:21.837477    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:21.839023    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:21.888800    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:22.117004    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:22.336891    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:22.338978    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:22.388712    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:22.616529    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:22.837363    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:22.838786    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:22.888370    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:22.908364    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:23.117142    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:23.337705    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:23.338517    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:23.388247    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:23.616287    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:23.837413    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:23.838865    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:23.889058    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:24.116987    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:24.337108    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:24.338036    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:24.388673    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:24.616744    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:24.836942    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:24.839286    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:24.888163    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:25.117329    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:25.337286    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:25.339004    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:25.388689    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:25.407272    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:25.617136    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:25.837135    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:25.838860    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:25.888795    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:26.117522    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:26.337158    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:26.338980    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:26.388817    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:26.617243    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:26.836853    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:26.838175    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:26.887892    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:27.116865    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:27.338186    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:27.339436    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:27.388097    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:27.408037    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:27.617022    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:27.838555    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:27.838719    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:27.888434    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:28.117609    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:28.336839    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:28.339390    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:28.388848    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:28.616812    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:28.837008    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:28.839343    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:28.888530    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:29.117367    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:29.337041    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:29.338458    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:29.388171    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:29.408205    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:29.606490    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:29.617110    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:29.839315    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:29.839795    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:29.888918    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:30.117357    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:30.339164    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:30.339615    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:30.389037    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:30.489174    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:30.489207    4872 retry.go:31] will retry after 8.946405924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
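
The two retry.go entries in this log (8.946405924s here, 17.295065096s after the next failure) show minikube backing off between addon-apply attempts rather than retrying immediately. A minimal Go sketch of that jittered-backoff pattern; runApply, the initial delay, and the attempt cap are illustrative stand-ins, not minikube's actual retry code:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // runApply stands in for the kubectl apply invocation above (hypothetical).
    func runApply() error { return fmt.Errorf("apply failed") }

    func main() {
    	backoff := 5 * time.Second // assumed initial delay
    	for attempt := 1; attempt <= 5; attempt++ {
    		if err := runApply(); err == nil {
    			return
    		}
    		// Add jitter so concurrent retriers don't fire in lockstep,
    		// then roughly double the base delay for the next attempt.
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
    		fmt.Printf("will retry after %s\n", sleep)
    		time.Sleep(sleep)
    		backoff *= 2
    	}
    }

The roughly doubling, slightly irregular intervals logged above are consistent with this kind of jittered exponential backoff.
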
	I1025 08:31:30.617476    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:30.838109    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:30.838759    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:30.888764    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:31.116619    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:31.338537    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:31.338037    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:31.389791    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:31.408601    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:31.616751    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:31.836677    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:31.839114    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:31.888849    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:32.116376    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:32.337582    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:32.338331    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:32.389100    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:32.616866    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:32.836553    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:32.838553    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:32.888412    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:33.116795    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:33.337724    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:33.338969    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:33.388832    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:33.616626    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:33.837443    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:33.839858    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:33.888461    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 08:31:33.908371    4872 node_ready.go:57] node "addons-468341" has "Ready":"False" status (will retry)
	I1025 08:31:34.117896    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:34.337350    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:34.339769    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:34.388636    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:34.616791    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:34.836715    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:34.838951    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:34.888774    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:35.116816    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:35.337931    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:35.338497    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:35.388583    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:35.617614    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:35.837856    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:35.838688    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:35.888243    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:36.117298    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:36.341828    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:36.342306    4872 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 08:31:36.342371    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:36.392928    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:36.418408    4872 node_ready.go:49] node "addons-468341" is "Ready"
	I1025 08:31:36.418484    4872 node_ready.go:38] duration metric: took 39.013899863s for node "addons-468341" to be "Ready" ...
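
node_ready.go reports the transition from "Ready":"False" to Ready after about 39s of polling. A hedged client-go sketch of the underlying check, reading the node's Ready condition; the kubeconfig path and the one-shot Get (instead of minikube's poll loop) are simplifications:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Path taken from the ssh_runner lines above; illustrative only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	node, err := client.CoreV1().Nodes().Get(context.Background(), "addons-468341", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// The wait loop above retries until this condition reports "True".
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			fmt.Printf("node %q Ready=%s\n", node.Name, cond.Status)
    		}
    	}
    }
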
	I1025 08:31:36.418521    4872 api_server.go:52] waiting for apiserver process to appear ...
	I1025 08:31:36.418607    4872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 08:31:36.437095    4872 api_server.go:72] duration metric: took 41.69164123s to wait for apiserver process to appear ...
	I1025 08:31:36.437120    4872 api_server.go:88] waiting for apiserver healthz status ...
	I1025 08:31:36.437175    4872 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 08:31:36.448230    4872 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 08:31:36.449282    4872 api_server.go:141] control plane version: v1.34.1
	I1025 08:31:36.449322    4872 api_server.go:131] duration metric: took 12.194486ms to wait for apiserver health ...
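
The healthz wait above is a plain HTTPS GET against the apiserver that succeeds once it returns 200 with body "ok". A minimal sketch using the address from the log; TLS verification is skipped only to keep the example short (the real client trusts the cluster CA instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// InsecureSkipVerify is for illustration only; do not do this in production.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body) // healthy apiserver: "200: ok"
    }
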
	I1025 08:31:36.449331    4872 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 08:31:36.459279    4872 system_pods.go:59] 19 kube-system pods found
	I1025 08:31:36.459323    4872 system_pods.go:61] "coredns-66bc5c9577-dh6v4" [a83a218d-bfcd-4174-955e-eeb9264cb12f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:31:36.459366    4872 system_pods.go:61] "csi-hostpath-attacher-0" [adca97a9-465e-4053-8da2-1647455bd10d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:31:36.459382    4872 system_pods.go:61] "csi-hostpath-resizer-0" [7efafc69-47a7-4d91-9d6e-660223b9207b] Pending
	I1025 08:31:36.459388    4872 system_pods.go:61] "csi-hostpathplugin-wm2b7" [829d1b6b-726d-4eb4-b18e-0e6b86c1755d] Pending
	I1025 08:31:36.459398    4872 system_pods.go:61] "etcd-addons-468341" [9f5fd6e8-de6d-49b1-b102-383a1814fab7] Running
	I1025 08:31:36.459403    4872 system_pods.go:61] "kindnet-rb4dc" [ff4c343e-ba3a-4ceb-adc7-a42c595072c7] Running
	I1025 08:31:36.459408    4872 system_pods.go:61] "kube-apiserver-addons-468341" [29bef051-6bb7-49fc-8e31-25ffc0ace270] Running
	I1025 08:31:36.459429    4872 system_pods.go:61] "kube-controller-manager-addons-468341" [b34bd6e5-2d40-4ad2-a111-ee861c618f57] Running
	I1025 08:31:36.459441    4872 system_pods.go:61] "kube-ingress-dns-minikube" [fef98f06-32c6-44e6-8a25-dce9feb2bc80] Pending
	I1025 08:31:36.459446    4872 system_pods.go:61] "kube-proxy-58zqr" [3d51ef2f-f60c-41f7-a794-69cb67431709] Running
	I1025 08:31:36.459451    4872 system_pods.go:61] "kube-scheduler-addons-468341" [890a9dc4-f6dc-4545-a4bb-15356976c393] Running
	I1025 08:31:36.459459    4872 system_pods.go:61] "metrics-server-85b7d694d7-rqmn4" [1709dfd1-357e-496a-98b7-205be9cae357] Pending
	I1025 08:31:36.459465    4872 system_pods.go:61] "nvidia-device-plugin-daemonset-w5ht9" [05248aa9-d292-4130-b10d-c632220baebb] Pending
	I1025 08:31:36.459476    4872 system_pods.go:61] "registry-6b586f9694-bl9lz" [1d570e3f-1a7f-47f7-9a56-92f7a27efe03] Pending
	I1025 08:31:36.459483    4872 system_pods.go:61] "registry-creds-764b6fb674-q5vpt" [0ac9c422-007f-4643-8aaa-fa94a38fc826] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:31:36.459493    4872 system_pods.go:61] "registry-proxy-xjrqf" [8a4de784-ff9c-48be-a85b-956955a98f06] Pending
	I1025 08:31:36.459518    4872 system_pods.go:61] "snapshot-controller-7d9fbc56b8-brfpz" [b6abd38b-f9b9-445e-b732-967716b4219d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:31:36.459528    4872 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kwsbg" [9f0cc055-bbc4-44b1-b9dd-41670fa5d058] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:31:36.459536    4872 system_pods.go:61] "storage-provisioner" [18b717a7-f9d4-4696-9839-6564fcdc4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:31:36.459546    4872 system_pods.go:74] duration metric: took 10.208732ms to wait for pod list to return data ...
	I1025 08:31:36.459561    4872 default_sa.go:34] waiting for default service account to be created ...
	I1025 08:31:36.479484    4872 default_sa.go:45] found service account: "default"
	I1025 08:31:36.479515    4872 default_sa.go:55] duration metric: took 19.946627ms for default service account to be created ...
	I1025 08:31:36.479525    4872 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 08:31:36.565777    4872 system_pods.go:86] 19 kube-system pods found
	I1025 08:31:36.565813    4872 system_pods.go:89] "coredns-66bc5c9577-dh6v4" [a83a218d-bfcd-4174-955e-eeb9264cb12f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:31:36.565822    4872 system_pods.go:89] "csi-hostpath-attacher-0" [adca97a9-465e-4053-8da2-1647455bd10d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:31:36.565884    4872 system_pods.go:89] "csi-hostpath-resizer-0" [7efafc69-47a7-4d91-9d6e-660223b9207b] Pending
	I1025 08:31:36.565889    4872 system_pods.go:89] "csi-hostpathplugin-wm2b7" [829d1b6b-726d-4eb4-b18e-0e6b86c1755d] Pending
	I1025 08:31:36.565893    4872 system_pods.go:89] "etcd-addons-468341" [9f5fd6e8-de6d-49b1-b102-383a1814fab7] Running
	I1025 08:31:36.565906    4872 system_pods.go:89] "kindnet-rb4dc" [ff4c343e-ba3a-4ceb-adc7-a42c595072c7] Running
	I1025 08:31:36.565911    4872 system_pods.go:89] "kube-apiserver-addons-468341" [29bef051-6bb7-49fc-8e31-25ffc0ace270] Running
	I1025 08:31:36.565928    4872 system_pods.go:89] "kube-controller-manager-addons-468341" [b34bd6e5-2d40-4ad2-a111-ee861c618f57] Running
	I1025 08:31:36.565942    4872 system_pods.go:89] "kube-ingress-dns-minikube" [fef98f06-32c6-44e6-8a25-dce9feb2bc80] Pending
	I1025 08:31:36.565946    4872 system_pods.go:89] "kube-proxy-58zqr" [3d51ef2f-f60c-41f7-a794-69cb67431709] Running
	I1025 08:31:36.565971    4872 system_pods.go:89] "kube-scheduler-addons-468341" [890a9dc4-f6dc-4545-a4bb-15356976c393] Running
	I1025 08:31:36.566000    4872 system_pods.go:89] "metrics-server-85b7d694d7-rqmn4" [1709dfd1-357e-496a-98b7-205be9cae357] Pending
	I1025 08:31:36.566006    4872 system_pods.go:89] "nvidia-device-plugin-daemonset-w5ht9" [05248aa9-d292-4130-b10d-c632220baebb] Pending
	I1025 08:31:36.566009    4872 system_pods.go:89] "registry-6b586f9694-bl9lz" [1d570e3f-1a7f-47f7-9a56-92f7a27efe03] Pending
	I1025 08:31:36.566023    4872 system_pods.go:89] "registry-creds-764b6fb674-q5vpt" [0ac9c422-007f-4643-8aaa-fa94a38fc826] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:31:36.566028    4872 system_pods.go:89] "registry-proxy-xjrqf" [8a4de784-ff9c-48be-a85b-956955a98f06] Pending
	I1025 08:31:36.566043    4872 system_pods.go:89] "snapshot-controller-7d9fbc56b8-brfpz" [b6abd38b-f9b9-445e-b732-967716b4219d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:31:36.566050    4872 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kwsbg" [9f0cc055-bbc4-44b1-b9dd-41670fa5d058] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:31:36.566057    4872 system_pods.go:89] "storage-provisioner" [18b717a7-f9d4-4696-9839-6564fcdc4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:31:36.566082    4872 retry.go:31] will retry after 271.33589ms: missing components: kube-dns
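
The system_pods wait lists every kube-system pod and retries while any required component is still Pending; here kube-dns (coredns) is the holdout. A sketch of that readiness scan with client-go, under the same illustrative kubeconfig assumption as above:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, pod := range pods.Items {
    		// Any pod still short of Running (or Succeeded) keeps the wait loop retrying.
    		if pod.Status.Phase != corev1.PodRunning && pod.Status.Phase != corev1.PodSucceeded {
    			fmt.Printf("still waiting on %s (%s)\n", pod.Name, pod.Status.Phase)
    		}
    	}
    }
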
	I1025 08:31:36.694257    4872 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 08:31:36.694283    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:36.847515    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:36.849293    4872 system_pods.go:86] 19 kube-system pods found
	I1025 08:31:36.849325    4872 system_pods.go:89] "coredns-66bc5c9577-dh6v4" [a83a218d-bfcd-4174-955e-eeb9264cb12f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:31:36.849334    4872 system_pods.go:89] "csi-hostpath-attacher-0" [adca97a9-465e-4053-8da2-1647455bd10d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:31:36.849364    4872 system_pods.go:89] "csi-hostpath-resizer-0" [7efafc69-47a7-4d91-9d6e-660223b9207b] Pending
	I1025 08:31:36.849378    4872 system_pods.go:89] "csi-hostpathplugin-wm2b7" [829d1b6b-726d-4eb4-b18e-0e6b86c1755d] Pending
	I1025 08:31:36.849382    4872 system_pods.go:89] "etcd-addons-468341" [9f5fd6e8-de6d-49b1-b102-383a1814fab7] Running
	I1025 08:31:36.849386    4872 system_pods.go:89] "kindnet-rb4dc" [ff4c343e-ba3a-4ceb-adc7-a42c595072c7] Running
	I1025 08:31:36.849398    4872 system_pods.go:89] "kube-apiserver-addons-468341" [29bef051-6bb7-49fc-8e31-25ffc0ace270] Running
	I1025 08:31:36.849403    4872 system_pods.go:89] "kube-controller-manager-addons-468341" [b34bd6e5-2d40-4ad2-a111-ee861c618f57] Running
	I1025 08:31:36.849408    4872 system_pods.go:89] "kube-ingress-dns-minikube" [fef98f06-32c6-44e6-8a25-dce9feb2bc80] Pending
	I1025 08:31:36.849417    4872 system_pods.go:89] "kube-proxy-58zqr" [3d51ef2f-f60c-41f7-a794-69cb67431709] Running
	I1025 08:31:36.849421    4872 system_pods.go:89] "kube-scheduler-addons-468341" [890a9dc4-f6dc-4545-a4bb-15356976c393] Running
	I1025 08:31:36.849440    4872 system_pods.go:89] "metrics-server-85b7d694d7-rqmn4" [1709dfd1-357e-496a-98b7-205be9cae357] Pending
	I1025 08:31:36.849447    4872 system_pods.go:89] "nvidia-device-plugin-daemonset-w5ht9" [05248aa9-d292-4130-b10d-c632220baebb] Pending
	I1025 08:31:36.849451    4872 system_pods.go:89] "registry-6b586f9694-bl9lz" [1d570e3f-1a7f-47f7-9a56-92f7a27efe03] Pending
	I1025 08:31:36.849457    4872 system_pods.go:89] "registry-creds-764b6fb674-q5vpt" [0ac9c422-007f-4643-8aaa-fa94a38fc826] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:31:36.849461    4872 system_pods.go:89] "registry-proxy-xjrqf" [8a4de784-ff9c-48be-a85b-956955a98f06] Pending
	I1025 08:31:36.849480    4872 system_pods.go:89] "snapshot-controller-7d9fbc56b8-brfpz" [b6abd38b-f9b9-445e-b732-967716b4219d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:31:36.849493    4872 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kwsbg" [9f0cc055-bbc4-44b1-b9dd-41670fa5d058] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:31:36.849500    4872 system_pods.go:89] "storage-provisioner" [18b717a7-f9d4-4696-9839-6564fcdc4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:31:36.849519    4872 retry.go:31] will retry after 377.583974ms: missing components: kube-dns
	I1025 08:31:36.849610    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:36.893747    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:37.125374    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:37.252921    4872 system_pods.go:86] 19 kube-system pods found
	I1025 08:31:37.252957    4872 system_pods.go:89] "coredns-66bc5c9577-dh6v4" [a83a218d-bfcd-4174-955e-eeb9264cb12f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 08:31:37.252995    4872 system_pods.go:89] "csi-hostpath-attacher-0" [adca97a9-465e-4053-8da2-1647455bd10d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 08:31:37.253011    4872 system_pods.go:89] "csi-hostpath-resizer-0" [7efafc69-47a7-4d91-9d6e-660223b9207b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 08:31:37.253019    4872 system_pods.go:89] "csi-hostpathplugin-wm2b7" [829d1b6b-726d-4eb4-b18e-0e6b86c1755d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 08:31:37.253028    4872 system_pods.go:89] "etcd-addons-468341" [9f5fd6e8-de6d-49b1-b102-383a1814fab7] Running
	I1025 08:31:37.253034    4872 system_pods.go:89] "kindnet-rb4dc" [ff4c343e-ba3a-4ceb-adc7-a42c595072c7] Running
	I1025 08:31:37.253039    4872 system_pods.go:89] "kube-apiserver-addons-468341" [29bef051-6bb7-49fc-8e31-25ffc0ace270] Running
	I1025 08:31:37.253044    4872 system_pods.go:89] "kube-controller-manager-addons-468341" [b34bd6e5-2d40-4ad2-a111-ee861c618f57] Running
	I1025 08:31:37.253066    4872 system_pods.go:89] "kube-ingress-dns-minikube" [fef98f06-32c6-44e6-8a25-dce9feb2bc80] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 08:31:37.253077    4872 system_pods.go:89] "kube-proxy-58zqr" [3d51ef2f-f60c-41f7-a794-69cb67431709] Running
	I1025 08:31:37.253082    4872 system_pods.go:89] "kube-scheduler-addons-468341" [890a9dc4-f6dc-4545-a4bb-15356976c393] Running
	I1025 08:31:37.253088    4872 system_pods.go:89] "metrics-server-85b7d694d7-rqmn4" [1709dfd1-357e-496a-98b7-205be9cae357] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 08:31:37.253098    4872 system_pods.go:89] "nvidia-device-plugin-daemonset-w5ht9" [05248aa9-d292-4130-b10d-c632220baebb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 08:31:37.253119    4872 system_pods.go:89] "registry-6b586f9694-bl9lz" [1d570e3f-1a7f-47f7-9a56-92f7a27efe03] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 08:31:37.253126    4872 system_pods.go:89] "registry-creds-764b6fb674-q5vpt" [0ac9c422-007f-4643-8aaa-fa94a38fc826] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 08:31:37.253159    4872 system_pods.go:89] "registry-proxy-xjrqf" [8a4de784-ff9c-48be-a85b-956955a98f06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 08:31:37.253172    4872 system_pods.go:89] "snapshot-controller-7d9fbc56b8-brfpz" [b6abd38b-f9b9-445e-b732-967716b4219d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:31:37.253180    4872 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kwsbg" [9f0cc055-bbc4-44b1-b9dd-41670fa5d058] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 08:31:37.253195    4872 system_pods.go:89] "storage-provisioner" [18b717a7-f9d4-4696-9839-6564fcdc4480] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 08:31:37.253207    4872 system_pods.go:126] duration metric: took 773.676098ms to wait for k8s-apps to be running ...
	I1025 08:31:37.253216    4872 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 08:31:37.253285    4872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 08:31:37.278350    4872 system_svc.go:56] duration metric: took 25.123686ms WaitForService to wait for kubelet
	I1025 08:31:37.278376    4872 kubeadm.go:586] duration metric: took 42.532926727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 08:31:37.278396    4872 node_conditions.go:102] verifying NodePressure condition ...
	I1025 08:31:37.300522    4872 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 08:31:37.300562    4872 node_conditions.go:123] node cpu capacity is 2
	I1025 08:31:37.300574    4872 node_conditions.go:105] duration metric: took 22.172858ms to run NodePressure ...
	I1025 08:31:37.300606    4872 start.go:241] waiting for startup goroutines ...
	I1025 08:31:37.353539    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:37.354058    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:37.458109    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:37.617503    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:37.841918    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:37.849776    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:37.891968    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:38.118759    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:38.341832    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:38.344393    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:38.388763    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:38.617531    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:38.837200    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:38.839445    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:38.888853    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:39.117305    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:39.338758    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:39.339060    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:39.389108    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:39.436293    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:39.617033    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:39.837308    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:39.843162    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:39.888013    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:40.117358    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:40.337138    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:40.339681    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:40.388139    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:40.586237    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.149907989s)
	W1025 08:31:40.586283    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:40.586311    4872 retry.go:31] will retry after 17.295065096s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
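
The recurring failure here is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml with "[apiVersion not set, kind not set]": the decoded object carries no type header, so every other manifest in the batch applies cleanly while this file aborts the command with status 1. A small sketch of the same header check, assuming a single-document YAML file (sigs.k8s.io/yaml decodes one document at a time):

    package main

    import (
    	"fmt"
    	"os"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
    	if err != nil {
    		panic(err)
    	}
    	var tm metav1.TypeMeta
    	if err := yaml.Unmarshal(data, &tm); err != nil {
    		panic(err)
    	}
    	if tm.APIVersion == "" || tm.Kind == "" {
    		// This is the condition kubectl reports as "[apiVersion not set, kind not set]".
    		fmt.Println("manifest is missing apiVersion/kind; client-side validation will fail")
    	}
    }

As the error message itself notes, --validate=false would skip this check, but the addon code retries the apply instead, which is why the same stdout/stderr block repeats throughout this log.
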
	I1025 08:31:40.618857    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:40.836940    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:40.839469    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:40.889093    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:41.117886    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:41.337107    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:41.339245    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:41.388337    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:41.617691    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:41.837797    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:41.838736    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:41.888972    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:42.118061    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:42.339318    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:42.349443    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:42.389258    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:42.617972    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:42.837114    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:42.838227    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:42.888386    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:43.116696    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:43.337514    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:43.339696    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:43.389140    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:43.618679    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:43.838480    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:43.840464    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:43.888682    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:44.117917    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:44.339979    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:44.340359    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:44.388589    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:44.617017    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:44.837138    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:44.839545    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:44.888443    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:45.118553    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:45.354121    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:45.354329    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:45.390929    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:45.617363    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:45.837884    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:45.839344    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:45.889880    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:46.117019    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:46.337411    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:46.340137    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:46.388905    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:46.616931    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:46.836651    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:46.839391    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:46.889354    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:47.117559    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:47.336506    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:47.338986    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:47.388845    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:47.617493    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:47.837474    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:47.839247    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:47.889659    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:48.117177    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:48.337182    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:48.339932    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:48.389754    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:48.617537    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:48.840010    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:48.841194    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:48.889423    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:49.118336    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:49.337788    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:49.338625    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:49.389481    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:49.617738    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:49.837201    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:49.840126    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:49.893546    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:50.117885    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:50.338240    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:50.339775    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:50.389090    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:50.616858    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:50.839330    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:50.839636    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:50.937610    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:51.117959    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:51.338829    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:51.339796    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:51.389141    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:51.618089    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:51.837611    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:51.840557    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:51.898464    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:52.118103    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:52.337608    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:52.339338    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:52.388633    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:52.617455    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:52.839389    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:52.839826    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:52.892607    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:53.117140    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:53.337610    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:53.339711    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:53.388736    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:53.617567    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:53.853248    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:53.853661    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:53.893349    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:54.117521    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:54.338959    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:54.339169    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:54.388036    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:54.617284    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:54.837831    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:54.839238    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:54.891217    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:55.118774    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:55.338296    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:55.338924    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:55.389598    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:55.618147    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:55.837468    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:55.838163    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:55.888228    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:56.118065    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:56.342668    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:56.343210    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:56.440310    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:56.618292    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:56.839063    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:56.840180    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:56.888382    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:57.117479    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:57.339704    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:57.340358    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:57.388508    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:57.621086    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:57.840141    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:57.840617    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:57.881917    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:31:57.888315    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:58.117263    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:58.339089    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:58.339520    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:58.439810    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:58.619334    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:58.839832    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:58.840308    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:58.888292    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:59.118873    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:59.183919    4872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.301922043s)
	W1025 08:31:59.183950    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:31:59.183967    4872 retry.go:31] will retry after 12.216152943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
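
The validation failure above is a manifest problem, not a cluster problem: kubectl's client-side schema check requires every YAML document in a file to declare apiVersion and kind, and ig-crd.yaml evidently contains a document (for instance an empty one left behind by a stray "---" separator) that declares neither, which is why every other object in the batch still applies cleanly. A rough way to reproduce the check without touching cluster state, assuming shell access to the node and the paths from this log, is a client-side dry run:

    # peek at the manifest header, then validate it client-side only
    head -n 5 /etc/kubernetes/addons/ig-crd.yaml
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
      -f /etc/kubernetes/addons/ig-crd.yaml

The --validate=false escape hatch suggested in the stderr would silence the client-side check, but a document with no apiVersion or kind still cannot be turned into an API object, so the gadget CRD would remain missing either way.
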
	I1025 08:31:59.339204    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:59.339830    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:59.389522    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:31:59.617581    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:31:59.836733    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:31:59.839182    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:31:59.888329    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:00.135833    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:00.355791    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:00.356009    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:00.400517    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:00.617756    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:00.836661    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:00.838792    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:00.888519    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:01.117498    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:01.339395    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:01.339901    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:01.389071    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:01.619734    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:01.836819    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:01.839817    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:01.888723    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:02.117622    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:02.339584    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:02.340865    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:02.388839    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:02.617824    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:02.838581    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:02.839520    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:02.888751    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:03.117277    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:03.338551    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:03.338990    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:03.389430    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:03.617508    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:03.840072    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:03.840545    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:03.888581    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:04.117484    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:04.336825    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:04.339846    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:04.388941    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:04.618254    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:04.838798    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:04.839293    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:04.890046    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:05.118009    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:05.339614    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:05.340208    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:05.388430    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:05.619342    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:05.839804    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:05.840149    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:05.889793    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:06.118033    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:06.339534    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:06.339546    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:06.388775    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:06.624677    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:06.838588    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:06.842459    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:06.890095    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:07.121130    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:07.341274    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:07.341655    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:07.390229    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:07.618570    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:07.837307    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:07.840060    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:07.889550    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:08.117171    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:08.339183    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:08.339255    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:08.439731    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:08.619659    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:08.840759    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:08.844751    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:08.890624    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:09.118615    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:09.339987    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:09.341054    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:09.390426    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:09.620043    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:09.845345    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:09.845759    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:09.891451    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:10.122308    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:10.340078    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:10.340423    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:10.388804    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:10.617739    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:10.837059    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:10.839735    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:10.888205    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:11.118498    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:11.336959    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:11.339546    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:11.390472    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:11.400809    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 08:32:11.618491    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:11.843967    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 08:32:11.844388    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:11.943618    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:12.128608    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 08:32:12.314640    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:32:12.314672    4872 retry.go:31] will retry after 41.515236646s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 08:32:12.338562    4872 kapi.go:107] duration metric: took 1m11.004812987s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 08:32:12.339136    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:12.388281    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:12.617956    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:12.838915    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:12.889311    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:13.117764    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:13.339482    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:13.389482    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:13.618295    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:13.839487    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:13.888336    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:14.117386    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:14.339048    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:14.389157    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:14.617411    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:14.840684    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:14.888119    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:15.117203    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:15.342753    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:15.390614    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:15.617079    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:15.839880    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:15.889190    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:16.120421    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:16.339218    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:16.388934    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:16.617487    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:16.838945    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:16.889296    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:17.118084    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:17.339911    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:17.439966    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:17.617057    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:17.839964    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:17.889227    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:18.117569    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:18.339158    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:18.389451    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:18.618913    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:18.839821    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:18.890628    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:19.117875    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:19.339621    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:19.389083    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:19.617842    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:19.839679    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:19.888979    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:20.118007    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:20.339607    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:20.391004    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:20.618099    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:20.839739    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:20.888768    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:21.117782    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:21.339033    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:21.389139    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:21.620318    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:21.839907    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:21.888798    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:22.117948    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:22.340389    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:22.388756    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:22.617544    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:22.840220    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:22.888249    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:23.117190    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:23.338462    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:23.388487    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:23.617257    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:23.839078    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:23.889305    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:24.119163    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:24.340172    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:24.388644    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:24.619499    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:24.840499    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:24.890486    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 08:32:25.117970    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:25.340372    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:25.393603    4872 kapi.go:107] duration metric: took 1m20.508374791s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 08:32:25.398083    4872 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-468341 cluster.
	I1025 08:32:25.402089    4872 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 08:32:25.406096    4872 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
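
The gcp-auth-skip-secret opt-out mentioned in the hint above is just a pod label that the gcp-auth webhook checks before mutating a new pod. A minimal sketch of launching a pod that opts out (the pod name and image are placeholders chosen for illustration; the "true" value follows the addon's documented convention):

    # hypothetical example pod that the gcp-auth webhook should leave alone
    kubectl run no-gcp-creds --image=busybox:stable \
      --labels=gcp-auth-skip-secret=true -- sleep 3600

Because the webhook acts at admission time, labeling an already-running pod has no effect; as the hint says, existing pods have to be recreated.
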
	I1025 08:32:25.617743    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:25.839713    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:26.117598    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:26.338472    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:26.617188    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:26.838242    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:27.116585    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:27.339421    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:27.616952    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:27.839652    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:28.125394    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:28.339139    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:28.617836    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:28.840759    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:29.117125    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:29.338121    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:29.618048    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:29.841016    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:30.121844    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:30.341217    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:30.617919    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:30.839459    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:31.123597    4872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 08:32:31.339296    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:31.617119    4872 kapi.go:107] duration metric: took 1m30.003578477s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 08:32:31.838920    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:32.338856    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:32.838152    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:33.338830    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:33.839599    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:34.338997    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:34.838362    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:35.339055    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:35.839555    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:36.339290    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:36.838724    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:37.339842    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:37.839698    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:38.339668    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:38.839504    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:39.339417    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:39.839282    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:40.339444    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:40.839735    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:41.339480    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:41.839791    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:42.339803    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:42.839797    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:43.339202    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:43.840412    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:44.340304    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:44.838871    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:45.339676    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:45.838707    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:46.339170    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:46.839176    4872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 08:32:47.339607    4872 kapi.go:107] duration metric: took 1m46.004264503s to wait for app.kubernetes.io/name=ingress-nginx ...
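
Each kapi.go:107 line above closes out one of these readiness polls. The same conditions can be checked by hand with kubectl wait, assuming the namespaces these addons deploy into (ingress-nginx for the controller, kube-system for the CSI driver, as the container listing below confirms):

    # roughly what the kapi.go poll waits for, expressed as kubectl commands
    kubectl -n ingress-nginx wait pod \
      -l app.kubernetes.io/name=ingress-nginx \
      --for=condition=Ready --timeout=120s
    kubectl -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver \
      --for=condition=Ready --timeout=120s
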
	I1025 08:32:53.832692    4872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1025 08:32:54.635487    4872 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 08:32:54.635580    4872 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1025 08:32:54.639080    4872 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, registry-creds, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1025 08:32:54.641952    4872 addons.go:514] duration metric: took 1m59.896093667s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin ingress-dns cloud-spanner storage-provisioner registry-creds metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
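
Note that inspektor-gadget is absent from the enabled-addons list above: all three apply attempts failed on the same ig-crd.yaml validation error, so minikube reported the warning and moved on with the remaining addons. Once the manifest is repaired, the addon can be retried, or disabled to stop the retries; a sketch using the profile from this run (minikube standing in for whichever binary drove the test):

    minikube -p addons-468341 addons disable inspektor-gadget
    minikube -p addons-468341 addons enable inspektor-gadget
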
	I1025 08:32:54.642026    4872 start.go:246] waiting for cluster config update ...
	I1025 08:32:54.642051    4872 start.go:255] writing updated cluster config ...
	I1025 08:32:54.643026    4872 ssh_runner.go:195] Run: rm -f paused
	I1025 08:32:54.647881    4872 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 08:32:54.651424    4872 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dh6v4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:54.656184    4872 pod_ready.go:94] pod "coredns-66bc5c9577-dh6v4" is "Ready"
	I1025 08:32:54.656206    4872 pod_ready.go:86] duration metric: took 4.751135ms for pod "coredns-66bc5c9577-dh6v4" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:54.658529    4872 pod_ready.go:83] waiting for pod "etcd-addons-468341" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:54.662977    4872 pod_ready.go:94] pod "etcd-addons-468341" is "Ready"
	I1025 08:32:54.663008    4872 pod_ready.go:86] duration metric: took 4.450589ms for pod "etcd-addons-468341" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:54.665465    4872 pod_ready.go:83] waiting for pod "kube-apiserver-addons-468341" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:54.669962    4872 pod_ready.go:94] pod "kube-apiserver-addons-468341" is "Ready"
	I1025 08:32:54.670019    4872 pod_ready.go:86] duration metric: took 4.53237ms for pod "kube-apiserver-addons-468341" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:54.672232    4872 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-468341" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:55.051546    4872 pod_ready.go:94] pod "kube-controller-manager-addons-468341" is "Ready"
	I1025 08:32:55.051577    4872 pod_ready.go:86] duration metric: took 379.317529ms for pod "kube-controller-manager-addons-468341" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:55.252518    4872 pod_ready.go:83] waiting for pod "kube-proxy-58zqr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:55.662725    4872 pod_ready.go:94] pod "kube-proxy-58zqr" is "Ready"
	I1025 08:32:55.662768    4872 pod_ready.go:86] duration metric: took 410.223671ms for pod "kube-proxy-58zqr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:55.851901    4872 pod_ready.go:83] waiting for pod "kube-scheduler-addons-468341" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:56.252030    4872 pod_ready.go:94] pod "kube-scheduler-addons-468341" is "Ready"
	I1025 08:32:56.252118    4872 pod_ready.go:86] duration metric: took 400.19293ms for pod "kube-scheduler-addons-468341" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:32:56.252139    4872 pod_ready.go:40] duration metric: took 1.604221337s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 08:32:56.318873    4872 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 08:32:56.322044    4872 out.go:179] * Done! kubectl is now configured to use "addons-468341" cluster and "default" namespace by default
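
The start.go:624 line two entries up is informational: kubectl 1.33.2 against a 1.34.1 cluster is a one-minor-version skew, which is inside kubectl's supported window of one minor version in either direction. Both sides of the skew can be re-checked at any time:

    # prints the client version and the server version it is talking to
    kubectl version
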
	
	
	==> CRI-O <==
	Oct 25 08:33:23 addons-468341 crio[829]: time="2025-10-25T08:33:23.889002218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:33:23 addons-468341 crio[829]: time="2025-10-25T08:33:23.889734749Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:33:23 addons-468341 crio[829]: time="2025-10-25T08:33:23.910874103Z" level=info msg="Created container be3690a1ca461fb49f9cb4b960a9ebcff89be13f69522991493b1a7dcb6c7cac: default/test-local-path/busybox" id=829e35b9-9603-4738-a18a-e5d6b0ca150b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 08:33:23 addons-468341 crio[829]: time="2025-10-25T08:33:23.914708714Z" level=info msg="Starting container: be3690a1ca461fb49f9cb4b960a9ebcff89be13f69522991493b1a7dcb6c7cac" id=6a6eab22-3da0-44f8-96fe-437724216ece name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 08:33:23 addons-468341 crio[829]: time="2025-10-25T08:33:23.917281701Z" level=info msg="Started container" PID=5539 containerID=be3690a1ca461fb49f9cb4b960a9ebcff89be13f69522991493b1a7dcb6c7cac description=default/test-local-path/busybox id=6a6eab22-3da0-44f8-96fe-437724216ece name=/runtime.v1.RuntimeService/StartContainer sandboxID=65e50d30cb17091aa693904dd8a9a7592dad5d4b5e2224c70021803ab569ef7a
	Oct 25 08:33:25 addons-468341 crio[829]: time="2025-10-25T08:33:25.443870456Z" level=info msg="Stopping pod sandbox: 65e50d30cb17091aa693904dd8a9a7592dad5d4b5e2224c70021803ab569ef7a" id=a7601640-90ac-487a-8de7-5ccb8b1dc305 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:33:25 addons-468341 crio[829]: time="2025-10-25T08:33:25.444189537Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:65e50d30cb17091aa693904dd8a9a7592dad5d4b5e2224c70021803ab569ef7a UID:e463ad75-a086-4ad9-a63a-5fe55cc188d1 NetNS:/var/run/netns/c96893eb-71c0-4492-a0cc-6e264a4633f2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000bcdc8}] Aliases:map[]}"
	Oct 25 08:33:25 addons-468341 crio[829]: time="2025-10-25T08:33:25.444347011Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Oct 25 08:33:25 addons-468341 crio[829]: time="2025-10-25T08:33:25.4709164Z" level=info msg="Stopped pod sandbox: 65e50d30cb17091aa693904dd8a9a7592dad5d4b5e2224c70021803ab569ef7a" id=a7601640-90ac-487a-8de7-5ccb8b1dc305 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:33:27 addons-468341 crio[829]: time="2025-10-25T08:33:27.031208215Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f/POD" id=3183fd43-6099-43fd-bf79-2f78a3b131b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 08:33:27 addons-468341 crio[829]: time="2025-10-25T08:33:27.031293196Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:33:27 addons-468341 crio[829]: time="2025-10-25T08:33:27.064118618Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f Namespace:local-path-storage ID:80cd51a3f7aecbdb274b586824eb6ab141959ad8a983bd01b809222a572f9755 UID:62fe8c96-0a99-490b-8740-ce30507d8141 NetNS:/var/run/netns/48b46c54-eb5d-435f-9a4c-0ce49fb4de11 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40009747c0}] Aliases:map[]}"
	Oct 25 08:33:27 addons-468341 crio[829]: time="2025-10-25T08:33:27.064179968Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f to CNI network \"kindnet\" (type=ptp)"
	Oct 25 08:33:27 addons-468341 crio[829]: time="2025-10-25T08:33:27.08120548Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f Namespace:local-path-storage ID:80cd51a3f7aecbdb274b586824eb6ab141959ad8a983bd01b809222a572f9755 UID:62fe8c96-0a99-490b-8740-ce30507d8141 NetNS:/var/run/netns/48b46c54-eb5d-435f-9a4c-0ce49fb4de11 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40009747c0}] Aliases:map[]}"
	Oct 25 08:33:27 addons-468341 crio[829]: time="2025-10-25T08:33:27.081431852Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f for CNI network kindnet (type=ptp)"
	Oct 25 08:33:27 addons-468341 crio[829]: time="2025-10-25T08:33:27.085394751Z" level=info msg="Ran pod sandbox 80cd51a3f7aecbdb274b586824eb6ab141959ad8a983bd01b809222a572f9755 with infra container: local-path-storage/helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f/POD" id=3183fd43-6099-43fd-bf79-2f78a3b131b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 08:33:27 addons-468341 crio[829]: time="2025-10-25T08:33:27.086824063Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=867a3056-8912-47ed-859c-c6e2ce75bdfe name=/runtime.v1.ImageService/ImageStatus
	Oct 25 08:33:27 addons-468341 crio[829]: time="2025-10-25T08:33:27.088958641Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=020f8119-8b09-4a2e-9189-d59f114ffa7e name=/runtime.v1.ImageService/ImageStatus
	Oct 25 08:33:27 addons-468341 crio[829]: time="2025-10-25T08:33:27.095262756Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f/helper-pod" id=d2e76d0b-6c6f-4ed9-baed-67181b8b5c0a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 08:33:27 addons-468341 crio[829]: time="2025-10-25T08:33:27.095386095Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:33:27 addons-468341 crio[829]: time="2025-10-25T08:33:27.102641023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:33:27 addons-468341 crio[829]: time="2025-10-25T08:33:27.10440751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 08:33:27 addons-468341 crio[829]: time="2025-10-25T08:33:27.146170883Z" level=info msg="Created container 1801a54fb0c3d84479103195a1e2875c66e7babb7cf5ec1498dcf61750b78147: local-path-storage/helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f/helper-pod" id=d2e76d0b-6c6f-4ed9-baed-67181b8b5c0a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 08:33:27 addons-468341 crio[829]: time="2025-10-25T08:33:27.148186847Z" level=info msg="Starting container: 1801a54fb0c3d84479103195a1e2875c66e7babb7cf5ec1498dcf61750b78147" id=1601c69b-990f-4aea-99f8-c570c4cd02ba name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 08:33:27 addons-468341 crio[829]: time="2025-10-25T08:33:27.151721954Z" level=info msg="Started container" PID=5692 containerID=1801a54fb0c3d84479103195a1e2875c66e7babb7cf5ec1498dcf61750b78147 description=local-path-storage/helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f/helper-pod id=1601c69b-990f-4aea-99f8-c570c4cd02ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=80cd51a3f7aecbdb274b586824eb6ab141959ad8a983bd01b809222a572f9755
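
The CRI-O entries above trace the test-local-path pod and its local-path-provisioner helper pods through sandbox creation, container start, and CNI teardown. The same objects can be inspected directly on the node with crictl, assuming the default CRI-O socket; the container IDs below are the ones from this log:

    # list the helper-pod containers, including exited ones
    sudo crictl ps -a --name helper-pod
    # full runtime state of the delete-helper container
    sudo crictl inspect 1801a54fb0c3d84479103195a1e2875c66e7babb7cf5ec1498dcf61750b78147
    # stdout/stderr of the test-local-path busybox container
    sudo crictl logs be3690a1ca461fb49f9cb4b960a9ebcff89be13f69522991493b1a7dcb6c7cac
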
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	1801a54fb0c3d       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             Less than a second ago   Exited              helper-pod                               0                   80cd51a3f7aec       helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f   local-path-storage
	be3690a1ca461       docker.io/library/busybox@sha256:aefc3a378c4cf11a6d85071438d3bf7634633a34c6a68d4c5f928516d556c366                                            4 seconds ago            Exited              busybox                                  0                   65e50d30cb170       test-local-path                                              default
	4be9ef456ac96       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            7 seconds ago            Exited              helper-pod                               0                   6b3f6e09c9b5f       helper-pod-create-pvc-e010f192-5941-4327-9df8-ac1fe331714f   local-path-storage
	5caabfc69c47b       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          8 seconds ago            Exited              registry-test                            0                   2c7c38e333807       registry-test                                                default
	eb2475edef4a2       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          28 seconds ago           Running             busybox                                  0                   b570b05b0f283       busybox                                                      default
	bab10a56ba960       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             41 seconds ago           Running             controller                               0                   ae2dc4b6a513f       ingress-nginx-controller-675c5ddd98-xp4f8                    ingress-nginx
	5fd1e4aa2eaec       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          56 seconds ago           Running             csi-snapshotter                          0                   49b310bb5cde1       csi-hostpathplugin-wm2b7                                     kube-system
	5f5e2ff55f9b9       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          58 seconds ago           Running             csi-provisioner                          0                   49b310bb5cde1       csi-hostpathplugin-wm2b7                                     kube-system
	fde287f234591       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            About a minute ago       Running             liveness-probe                           0                   49b310bb5cde1       csi-hostpathplugin-wm2b7                                     kube-system
	1c4a84678a48f       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           About a minute ago       Running             hostpath                                 0                   49b310bb5cde1       csi-hostpathplugin-wm2b7                                     kube-system
	3976f771a2e1f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                About a minute ago       Running             node-driver-registrar                    0                   49b310bb5cde1       csi-hostpathplugin-wm2b7                                     kube-system
	be7c56ff634b9       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             About a minute ago       Exited              patch                                    2                   2ab1bb1557333       ingress-nginx-admission-patch-jm6m8                          ingress-nginx
	db5d3923d2801       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 About a minute ago       Running             gcp-auth                                 0                   9334c4716c221       gcp-auth-78565c9fb4-gdr72                                    gcp-auth
	2bb61d4b205a8       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            About a minute ago       Running             gadget                                   0                   e82859ead66ee       gadget-blz29                                                 gadget
	c675c035a6dbb       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago       Running             csi-external-health-monitor-controller   0                   49b310bb5cde1       csi-hostpathplugin-wm2b7                                     kube-system
	f4608f1e20335       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago       Running             nvidia-device-plugin-ctr                 0                   75257ed111f61       nvidia-device-plugin-daemonset-w5ht9                         kube-system
	a45310ef4d134       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago       Exited              create                                   0                   5048757537bea       ingress-nginx-admission-create-wl2lj                         ingress-nginx
	2cb17a3d4c7c6       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago       Running             registry                                 0                   39c01edc5d9b1       registry-6b586f9694-bl9lz                                    kube-system
	53785f6bf53a5       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago       Running             local-path-provisioner                   0                   e6a0cd13d50c3       local-path-provisioner-648f6765c9-52rwl                      local-path-storage
	9dfa9f0508992       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago       Running             metrics-server                           0                   45bf921736c9b       metrics-server-85b7d694d7-rqmn4                              kube-system
	149fe53d8f125       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago       Running             minikube-ingress-dns                     0                   0c7fe301535fb       kube-ingress-dns-minikube                                    kube-system
	c0dd415aff39e       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago       Running             csi-resizer                              0                   9948e9da76c8b       csi-hostpath-resizer-0                                       kube-system
	3f2799b9c1b41       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago       Running             volume-snapshot-controller               0                   dd6d9c046cdb9       snapshot-controller-7d9fbc56b8-brfpz                         kube-system
	d658e0bd52e0d       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago       Running             csi-attacher                             0                   caf5f3599aca0       csi-hostpath-attacher-0                                      kube-system
	6abccf54d473f       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago       Running             volume-snapshot-controller               0                   0c34ed7de6d2d       snapshot-controller-7d9fbc56b8-kwsbg                         kube-system
	710588f58555a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago       Running             registry-proxy                           0                   6102355ede3b4       registry-proxy-xjrqf                                         kube-system
	66f18df75c44d       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago       Running             cloud-spanner-emulator                   0                   031b4c24d4c4e       cloud-spanner-emulator-86bd5cbb97-tkgt7                      default
	93f2dd786953b       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago       Running             yakd                                     0                   a55c68c16eb83       yakd-dashboard-5ff678cb9-6xxxr                               yakd-dashboard
	990bc617d7987       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago       Running             storage-provisioner                      0                   d2b0670f61288       storage-provisioner                                          kube-system
	3efcb5f51b3c4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago       Running             coredns                                  0                   f159b8a028368       coredns-66bc5c9577-dh6v4                                     kube-system
	6375e783d80c1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago            Running             kube-proxy                               0                   2157f4cd8cfb7       kube-proxy-58zqr                                             kube-system
	11da55b7006a0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago            Running             kindnet-cni                              0                   47791e14fc49f       kindnet-rb4dc                                                kube-system
	a8a4b543d2547       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago            Running             etcd                                     0                   fa4aa28c09db7       etcd-addons-468341                                           kube-system
	f40b4040bdb0c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago            Running             kube-scheduler                           0                   96e4c19d3e480       kube-scheduler-addons-468341                                 kube-system
	ca4b0c8b5bb6a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago            Running             kube-apiserver                           0                   ecd67cfbd72fa       kube-apiserver-addons-468341                                 kube-system
	2105d8a4af178       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago            Running             kube-controller-manager                  0                   ad6372860b36c       kube-controller-manager-addons-468341                        kube-system
	
	
	==> coredns [3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549] <==
	[INFO] 10.244.0.5:53607 - 47810 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001726283s
	[INFO] 10.244.0.5:53607 - 43547 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000121551s
	[INFO] 10.244.0.5:53607 - 64570 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000075429s
	[INFO] 10.244.0.5:40270 - 31308 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000188825s
	[INFO] 10.244.0.5:40270 - 31070 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000254081s
	[INFO] 10.244.0.5:38054 - 58316 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115061s
	[INFO] 10.244.0.5:38054 - 58497 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000294188s
	[INFO] 10.244.0.5:40217 - 62655 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085367s
	[INFO] 10.244.0.5:40217 - 62235 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000672s
	[INFO] 10.244.0.5:47154 - 16316 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001440021s
	[INFO] 10.244.0.5:47154 - 16111 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001542166s
	[INFO] 10.244.0.5:45851 - 13532 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000116784s
	[INFO] 10.244.0.5:45851 - 13111 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000186913s
	[INFO] 10.244.0.20:49885 - 49063 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000160173s
	[INFO] 10.244.0.20:58404 - 54588 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000105371s
	[INFO] 10.244.0.20:50653 - 22797 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144214s
	[INFO] 10.244.0.20:53434 - 63846 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084046s
	[INFO] 10.244.0.20:51048 - 20411 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000149285s
	[INFO] 10.244.0.20:55749 - 7642 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082888s
	[INFO] 10.244.0.20:36920 - 60131 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001965266s
	[INFO] 10.244.0.20:42946 - 24582 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002484938s
	[INFO] 10.244.0.20:59197 - 18586 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00148757s
	[INFO] 10.244.0.20:42203 - 35136 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001190034s
	[INFO] 10.244.0.23:55057 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000181949s
	[INFO] 10.244.0.23:51115 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126261s
	
	
	==> describe nodes <==
	Name:               addons-468341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-468341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=addons-468341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T08_30_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-468341
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-468341"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 08:30:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-468341
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 08:33:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 08:33:23 +0000   Sat, 25 Oct 2025 08:30:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 08:33:23 +0000   Sat, 25 Oct 2025 08:30:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 08:33:23 +0000   Sat, 25 Oct 2025 08:30:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 08:33:23 +0000   Sat, 25 Oct 2025 08:31:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-468341
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                5c9900c3-5a2e-4ead-b0e4-60c2e9f9bb56
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     cloud-spanner-emulator-86bd5cbb97-tkgt7                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gadget                      gadget-blz29                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gcp-auth                    gcp-auth-78565c9fb4-gdr72                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-xp4f8                     100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m27s
	  kube-system                 coredns-66bc5c9577-dh6v4                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m34s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 csi-hostpathplugin-wm2b7                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 etcd-addons-468341                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m39s
	  kube-system                 kindnet-rb4dc                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m34s
	  kube-system                 kube-apiserver-addons-468341                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kube-controller-manager-addons-468341                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-58zqr                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-scheduler-addons-468341                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 metrics-server-85b7d694d7-rqmn4                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m29s
	  kube-system                 nvidia-device-plugin-daemonset-w5ht9                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 registry-6b586f9694-bl9lz                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 registry-creds-764b6fb674-q5vpt                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 registry-proxy-xjrqf                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 snapshot-controller-7d9fbc56b8-brfpz                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 snapshot-controller-7d9fbc56b8-kwsbg                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  local-path-storage          helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-648f6765c9-52rwl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-6xxxr                                0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m32s  kube-proxy       
	  Normal   Starting                 2m39s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m39s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m39s  kubelet          Node addons-468341 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m39s  kubelet          Node addons-468341 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m39s  kubelet          Node addons-468341 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m35s  node-controller  Node addons-468341 event: Registered Node addons-468341 in Controller
	  Normal   NodeReady                112s   kubelet          Node addons-468341 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct25 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014683] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497292] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033389] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.792499] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.870372] kauditd_printk_skb: 36 callbacks suppressed
	[Oct25 08:30] overlayfs: idmapped layers are currently not supported
	[  +0.060360] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215] <==
	{"level":"warn","ts":"2025-10-25T08:30:45.379234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.402064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.415059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.432877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.457490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.467113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.518277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.521635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.526812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.542749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.558513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.575968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.606920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.618784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.640679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.666265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.700420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.716837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:30:45.784402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:31:01.905434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:31:01.922891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:31:23.706668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:31:23.733672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:31:23.753642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:31:23.770266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51554","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [db5d3923d2801bf564674486be86cb41ac4d433e036ee2ab0b46330f811e3c2c] <==
	2025/10/25 08:32:24 GCP Auth Webhook started!
	2025/10/25 08:32:56 Ready to marshal response ...
	2025/10/25 08:32:56 Ready to write response ...
	2025/10/25 08:32:57 Ready to marshal response ...
	2025/10/25 08:32:57 Ready to write response ...
	2025/10/25 08:32:57 Ready to marshal response ...
	2025/10/25 08:32:57 Ready to write response ...
	2025/10/25 08:33:17 Ready to marshal response ...
	2025/10/25 08:33:17 Ready to write response ...
	2025/10/25 08:33:18 Ready to marshal response ...
	2025/10/25 08:33:18 Ready to write response ...
	2025/10/25 08:33:18 Ready to marshal response ...
	2025/10/25 08:33:18 Ready to write response ...
	2025/10/25 08:33:26 Ready to marshal response ...
	2025/10/25 08:33:26 Ready to write response ...
	
	
	==> kernel <==
	 08:33:28 up 15 min,  0 user,  load average: 1.90, 1.43, 0.61
	Linux addons-468341 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2] <==
	E1025 08:31:26.825778       1 controller.go:417] "reading nfqueue stats" err="open /proc/net/netfilter/nfnetlink_queue: no such file or directory"
	I1025 08:31:35.623732       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:31:35.623769       1 main.go:301] handling current node
	I1025 08:31:45.620007       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:31:45.620040       1 main.go:301] handling current node
	I1025 08:31:55.618342       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:31:55.618373       1 main.go:301] handling current node
	I1025 08:32:05.622086       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:32:05.622140       1 main.go:301] handling current node
	I1025 08:32:15.618556       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:32:15.618593       1 main.go:301] handling current node
	I1025 08:32:25.617594       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:32:25.617631       1 main.go:301] handling current node
	I1025 08:32:35.618380       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:32:35.618502       1 main.go:301] handling current node
	I1025 08:32:45.618842       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:32:45.618885       1 main.go:301] handling current node
	I1025 08:32:55.618664       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:32:55.618697       1 main.go:301] handling current node
	I1025 08:33:05.620541       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:33:05.620580       1 main.go:301] handling current node
	I1025 08:33:15.626017       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:33:15.626052       1 main.go:301] handling current node
	I1025 08:33:25.618300       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:33:25.618333       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd] <==
	W1025 08:31:36.229415       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.83.226:443: connect: connection refused
	E1025 08:31:36.231497       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.83.226:443: connect: connection refused" logger="UnhandledError"
	W1025 08:31:36.300328       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.83.226:443: connect: connection refused
	E1025 08:31:36.300373       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.83.226:443: connect: connection refused" logger="UnhandledError"
	W1025 08:32:01.340274       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 08:32:01.340309       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1025 08:32:01.340323       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 08:32:01.341417       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 08:32:01.341495       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1025 08:32:01.341507       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 08:32:10.187015       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 08:32:10.187084       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 08:32:10.188340       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.132.206:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.132.206:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.132.206:443: connect: connection refused" logger="UnhandledError"
	E1025 08:32:10.189108       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.132.206:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.132.206:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.132.206:443: connect: connection refused" logger="UnhandledError"
	E1025 08:32:10.195009       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.132.206:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.132.206:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.132.206:443: connect: connection refused" logger="UnhandledError"
	I1025 08:32:10.333638       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1025 08:33:06.314868       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:44472: use of closed network connection
	E1025 08:33:06.533726       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:44484: use of closed network connection
	E1025 08:33:06.669089       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:44504: use of closed network connection
	
	
	==> kube-controller-manager [2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c] <==
	I1025 08:30:53.713012       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 08:30:53.713062       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 08:30:53.713280       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 08:30:53.714404       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 08:30:53.714454       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 08:30:53.714660       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 08:30:53.714760       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 08:30:53.714866       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 08:30:53.714911       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 08:30:53.715298       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 08:30:53.715385       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 08:30:53.733370       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 08:30:53.733401       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 08:30:53.733409       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1025 08:30:59.445922       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1025 08:31:23.699645       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 08:31:23.699810       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1025 08:31:23.699854       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1025 08:31:23.720778       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1025 08:31:23.726586       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1025 08:31:23.800516       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 08:31:23.826939       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 08:31:38.672612       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1025 08:31:53.806130       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 08:31:53.840023       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443] <==
	I1025 08:30:55.684549       1 server_linux.go:53] "Using iptables proxy"
	I1025 08:30:55.791775       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 08:30:55.892624       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 08:30:55.892708       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 08:30:55.892793       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 08:30:55.947398       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 08:30:55.947452       1 server_linux.go:132] "Using iptables Proxier"
	I1025 08:30:55.953607       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 08:30:55.965462       1 server.go:527] "Version info" version="v1.34.1"
	I1025 08:30:55.965490       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 08:30:55.966975       1 config.go:106] "Starting endpoint slice config controller"
	I1025 08:30:55.967000       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 08:30:55.967361       1 config.go:200] "Starting service config controller"
	I1025 08:30:55.967378       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 08:30:55.967721       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 08:30:55.967735       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 08:30:55.968166       1 config.go:309] "Starting node config controller"
	I1025 08:30:55.968172       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 08:30:55.968178       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 08:30:56.070548       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 08:30:56.070633       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 08:30:56.070955       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179] <==
	E1025 08:30:46.831568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 08:30:46.831740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 08:30:46.831869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 08:30:46.831871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 08:30:46.831923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 08:30:46.831973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 08:30:46.832022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 08:30:46.832068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 08:30:46.832157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 08:30:46.832169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 08:30:46.832228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 08:30:46.832277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 08:30:46.832319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 08:30:46.832356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 08:30:46.832398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 08:30:46.832474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 08:30:46.832568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 08:30:46.834166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 08:30:47.637677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 08:30:47.640113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 08:30:47.684573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 08:30:47.709377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 08:30:47.797381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 08:30:47.890598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1025 08:30:49.812317       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 08:33:25 addons-468341 kubelet[1291]: I1025 08:33:25.590562    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e463ad75-a086-4ad9-a63a-5fe55cc188d1-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "e463ad75-a086-4ad9-a63a-5fe55cc188d1" (UID: "e463ad75-a086-4ad9-a63a-5fe55cc188d1"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 25 08:33:25 addons-468341 kubelet[1291]: I1025 08:33:25.592486    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e463ad75-a086-4ad9-a63a-5fe55cc188d1-kube-api-access-w4qbx" (OuterVolumeSpecName: "kube-api-access-w4qbx") pod "e463ad75-a086-4ad9-a63a-5fe55cc188d1" (UID: "e463ad75-a086-4ad9-a63a-5fe55cc188d1"). InnerVolumeSpecName "kube-api-access-w4qbx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 25 08:33:25 addons-468341 kubelet[1291]: I1025 08:33:25.690660    1291 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4qbx\" (UniqueName: \"kubernetes.io/projected/e463ad75-a086-4ad9-a63a-5fe55cc188d1-kube-api-access-w4qbx\") on node \"addons-468341\" DevicePath \"\""
	Oct 25 08:33:25 addons-468341 kubelet[1291]: I1025 08:33:25.690702    1291 reconciler_common.go:299] "Volume detached for volume \"pvc-e010f192-5941-4327-9df8-ac1fe331714f\" (UniqueName: \"kubernetes.io/host-path/e463ad75-a086-4ad9-a63a-5fe55cc188d1-pvc-e010f192-5941-4327-9df8-ac1fe331714f\") on node \"addons-468341\" DevicePath \"\""
	Oct 25 08:33:25 addons-468341 kubelet[1291]: I1025 08:33:25.690716    1291 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e463ad75-a086-4ad9-a63a-5fe55cc188d1-gcp-creds\") on node \"addons-468341\" DevicePath \"\""
	Oct 25 08:33:26 addons-468341 kubelet[1291]: I1025 08:33:26.467478    1291 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65e50d30cb17091aa693904dd8a9a7592dad5d4b5e2224c70021803ab569ef7a"
	Oct 25 08:33:26 addons-468341 kubelet[1291]: I1025 08:33:26.800071    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/62fe8c96-0a99-490b-8740-ce30507d8141-data\") pod \"helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f\" (UID: \"62fe8c96-0a99-490b-8740-ce30507d8141\") " pod="local-path-storage/helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f"
	Oct 25 08:33:26 addons-468341 kubelet[1291]: I1025 08:33:26.800140    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-728dz\" (UniqueName: \"kubernetes.io/projected/62fe8c96-0a99-490b-8740-ce30507d8141-kube-api-access-728dz\") pod \"helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f\" (UID: \"62fe8c96-0a99-490b-8740-ce30507d8141\") " pod="local-path-storage/helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f"
	Oct 25 08:33:26 addons-468341 kubelet[1291]: I1025 08:33:26.800182    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/62fe8c96-0a99-490b-8740-ce30507d8141-gcp-creds\") pod \"helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f\" (UID: \"62fe8c96-0a99-490b-8740-ce30507d8141\") " pod="local-path-storage/helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f"
	Oct 25 08:33:26 addons-468341 kubelet[1291]: I1025 08:33:26.800223    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/62fe8c96-0a99-490b-8740-ce30507d8141-script\") pod \"helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f\" (UID: \"62fe8c96-0a99-490b-8740-ce30507d8141\") " pod="local-path-storage/helper-pod-delete-pvc-e010f192-5941-4327-9df8-ac1fe331714f"
	Oct 25 08:33:27 addons-468341 kubelet[1291]: W1025 08:33:27.083793    1291 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/921bcbb16e37f18586dfcad891fd4e6c424ad8acf17dde69218b365d67adf111/crio-80cd51a3f7aecbdb274b586824eb6ab141959ad8a983bd01b809222a572f9755 WatchSource:0}: Error finding container 80cd51a3f7aecbdb274b586824eb6ab141959ad8a983bd01b809222a572f9755: Status 404 returned error can't find the container with id 80cd51a3f7aecbdb274b586824eb6ab141959ad8a983bd01b809222a572f9755
	Oct 25 08:33:27 addons-468341 kubelet[1291]: I1025 08:33:27.543177    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-w5ht9" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 08:33:27 addons-468341 kubelet[1291]: I1025 08:33:27.546608    1291 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e463ad75-a086-4ad9-a63a-5fe55cc188d1" path="/var/lib/kubelet/pods/e463ad75-a086-4ad9-a63a-5fe55cc188d1/volumes"
	Oct 25 08:33:28 addons-468341 kubelet[1291]: I1025 08:33:28.619038    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/62fe8c96-0a99-490b-8740-ce30507d8141-gcp-creds\") pod \"62fe8c96-0a99-490b-8740-ce30507d8141\" (UID: \"62fe8c96-0a99-490b-8740-ce30507d8141\") "
	Oct 25 08:33:28 addons-468341 kubelet[1291]: I1025 08:33:28.619103    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-728dz\" (UniqueName: \"kubernetes.io/projected/62fe8c96-0a99-490b-8740-ce30507d8141-kube-api-access-728dz\") pod \"62fe8c96-0a99-490b-8740-ce30507d8141\" (UID: \"62fe8c96-0a99-490b-8740-ce30507d8141\") "
	Oct 25 08:33:28 addons-468341 kubelet[1291]: I1025 08:33:28.619136    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/62fe8c96-0a99-490b-8740-ce30507d8141-script\") pod \"62fe8c96-0a99-490b-8740-ce30507d8141\" (UID: \"62fe8c96-0a99-490b-8740-ce30507d8141\") "
	Oct 25 08:33:28 addons-468341 kubelet[1291]: I1025 08:33:28.619232    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/62fe8c96-0a99-490b-8740-ce30507d8141-data\") pod \"62fe8c96-0a99-490b-8740-ce30507d8141\" (UID: \"62fe8c96-0a99-490b-8740-ce30507d8141\") "
	Oct 25 08:33:28 addons-468341 kubelet[1291]: I1025 08:33:28.619399    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62fe8c96-0a99-490b-8740-ce30507d8141-data" (OuterVolumeSpecName: "data") pod "62fe8c96-0a99-490b-8740-ce30507d8141" (UID: "62fe8c96-0a99-490b-8740-ce30507d8141"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 25 08:33:28 addons-468341 kubelet[1291]: I1025 08:33:28.619431    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62fe8c96-0a99-490b-8740-ce30507d8141-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "62fe8c96-0a99-490b-8740-ce30507d8141" (UID: "62fe8c96-0a99-490b-8740-ce30507d8141"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 25 08:33:28 addons-468341 kubelet[1291]: I1025 08:33:28.619947    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62fe8c96-0a99-490b-8740-ce30507d8141-script" (OuterVolumeSpecName: "script") pod "62fe8c96-0a99-490b-8740-ce30507d8141" (UID: "62fe8c96-0a99-490b-8740-ce30507d8141"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 25 08:33:28 addons-468341 kubelet[1291]: I1025 08:33:28.621762    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62fe8c96-0a99-490b-8740-ce30507d8141-kube-api-access-728dz" (OuterVolumeSpecName: "kube-api-access-728dz") pod "62fe8c96-0a99-490b-8740-ce30507d8141" (UID: "62fe8c96-0a99-490b-8740-ce30507d8141"). InnerVolumeSpecName "kube-api-access-728dz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 25 08:33:28 addons-468341 kubelet[1291]: I1025 08:33:28.720325    1291 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/62fe8c96-0a99-490b-8740-ce30507d8141-data\") on node \"addons-468341\" DevicePath \"\""
	Oct 25 08:33:28 addons-468341 kubelet[1291]: I1025 08:33:28.720416    1291 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/62fe8c96-0a99-490b-8740-ce30507d8141-gcp-creds\") on node \"addons-468341\" DevicePath \"\""
	Oct 25 08:33:28 addons-468341 kubelet[1291]: I1025 08:33:28.720445    1291 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-728dz\" (UniqueName: \"kubernetes.io/projected/62fe8c96-0a99-490b-8740-ce30507d8141-kube-api-access-728dz\") on node \"addons-468341\" DevicePath \"\""
	Oct 25 08:33:28 addons-468341 kubelet[1291]: I1025 08:33:28.720459    1291 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/62fe8c96-0a99-490b-8740-ce30507d8141-script\") on node \"addons-468341\" DevicePath \"\""
	
	
	==> storage-provisioner [990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291] <==
	W1025 08:33:03.940846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:05.944050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:05.948869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:07.952171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:07.956689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:09.959925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:09.964882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:11.967972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:11.972509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:13.975580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:13.980110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:15.982986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:15.987472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:17.991482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:17.996838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:20.001127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:20.012362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:22.015786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:22.022824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:24.026515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:24.031789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:26.034731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:26.043151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:28.047794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:33:28.053651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
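
Note on the storage-provisioner log above: the identical "v1 Endpoints is deprecated in v1.33+" warning repeats in pairs roughly every two seconds, which is consistent with a renewal loop (likely endpoints-based leader election) still using the core/v1 Endpoints API instead of its discovery.k8s.io/v1 EndpointSlice replacement; the loop and its cause are an inference from the timestamps, not something the log states. Both APIs can be inspected directly with kubectl:

	# The deprecated core/v1 objects the provisioner still touches ...
	kubectl --context addons-468341 get endpoints -n kube-system
	# ... and the discovery.k8s.io/v1 replacement the warning recommends.
	kubectl --context addons-468341 get endpointslices.discovery.k8s.io -n kube-system
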
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-468341 -n addons-468341
helpers_test.go:269: (dbg) Run:  kubectl --context addons-468341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-wl2lj ingress-nginx-admission-patch-jm6m8 registry-creds-764b6fb674-q5vpt
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-468341 describe pod ingress-nginx-admission-create-wl2lj ingress-nginx-admission-patch-jm6m8 registry-creds-764b6fb674-q5vpt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-468341 describe pod ingress-nginx-admission-create-wl2lj ingress-nginx-admission-patch-jm6m8 registry-creds-764b6fb674-q5vpt: exit status 1 (104.893406ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wl2lj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-jm6m8" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-q5vpt" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-468341 describe pod ingress-nginx-admission-create-wl2lj ingress-nginx-admission-patch-jm6m8 registry-creds-764b6fb674-q5vpt: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-468341 addons disable headlamp --alsologtostderr -v=1: exit status 11 (285.542758ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:33:29.612826   12405 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:33:29.613069   12405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:29.613083   12405 out.go:374] Setting ErrFile to fd 2...
	I1025 08:33:29.613088   12405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:29.613390   12405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:33:29.613723   12405 mustload.go:65] Loading cluster: addons-468341
	I1025 08:33:29.614190   12405 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:29.614212   12405 addons.go:606] checking whether the cluster is paused
	I1025 08:33:29.614351   12405 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:29.614368   12405 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:33:29.614881   12405 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:33:29.633520   12405 ssh_runner.go:195] Run: systemctl --version
	I1025 08:33:29.633581   12405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:33:29.661076   12405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:33:29.764807   12405 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:33:29.764888   12405 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:33:29.809244   12405 cri.go:89] found id: "5fd1e4aa2eaec9a625e731b2310d7e53df6715992458f7b34f24c564a0371c93"
	I1025 08:33:29.809284   12405 cri.go:89] found id: "5f5e2ff55f9b9d96f5647e7d06d29ba2d01ac53c5f3bebb14c41ac3b15d46dbe"
	I1025 08:33:29.809290   12405 cri.go:89] found id: "fde287f23459127cc0438aff73855d690ce348105c0984d314227ebb893c0f27"
	I1025 08:33:29.809294   12405 cri.go:89] found id: "1c4a84678a48fe97a0d5e107fffed343bbdb5797c60914bfc1787ef22ec8aab6"
	I1025 08:33:29.809297   12405 cri.go:89] found id: "3976f771a2e1f4333179d9b66178f2b8d0e5df890f3e3934fa6b45cecc30471a"
	I1025 08:33:29.809301   12405 cri.go:89] found id: "c675c035a6dbbaf595ca82e7e5494c65d80846fa4403b093199b14027e8a1667"
	I1025 08:33:29.809304   12405 cri.go:89] found id: "f4608f1e203354a663895813be5940b353d5999288ba102363d0fc2ab5bee267"
	I1025 08:33:29.809306   12405 cri.go:89] found id: "2cb17a3d4c7c6272eb0c5b9a7105721c94d5c62089aae769495e41fd49a9130b"
	I1025 08:33:29.809310   12405 cri.go:89] found id: "9dfa9f05089922f4312c61cbba94162087a4b0ccb02343ec9524bd69b39eec86"
	I1025 08:33:29.809316   12405 cri.go:89] found id: "149fe53d8f125d10a2d1e5e32c97a97f55639e392cf1a6745398ed8ac79e7d66"
	I1025 08:33:29.809319   12405 cri.go:89] found id: "c0dd415aff39e64402a3d64867549a7a3b05dfcf210f7e1f94409f91384fa997"
	I1025 08:33:29.809322   12405 cri.go:89] found id: "3f2799b9c1b41eaec314d09a1269f3e858b9982860af8c4168ed54a9280b011d"
	I1025 08:33:29.809325   12405 cri.go:89] found id: "d658e0bd52e0d32446762f4e3466c7f28b2e31c7a876c4caaddd5444b238f373"
	I1025 08:33:29.809328   12405 cri.go:89] found id: "6abccf54d473ff21bf8b7e7a67510c44ab418b13e2ded840ecd714d35bc33050"
	I1025 08:33:29.809331   12405 cri.go:89] found id: "710588f58555a870efb846b35789c67ba50b0c47c5b86a2c12d2ebe8b7d3cf14"
	I1025 08:33:29.809336   12405 cri.go:89] found id: "990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291"
	I1025 08:33:29.809339   12405 cri.go:89] found id: "3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549"
	I1025 08:33:29.809342   12405 cri.go:89] found id: "6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443"
	I1025 08:33:29.809345   12405 cri.go:89] found id: "11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2"
	I1025 08:33:29.809348   12405 cri.go:89] found id: "a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215"
	I1025 08:33:29.809353   12405 cri.go:89] found id: "f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179"
	I1025 08:33:29.809356   12405 cri.go:89] found id: "ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd"
	I1025 08:33:29.809359   12405 cri.go:89] found id: "2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c"
	I1025 08:33:29.809362   12405 cri.go:89] found id: ""
	I1025 08:33:29.809410   12405 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:33:29.825970   12405 out.go:203] 
	W1025 08:33:29.829006   12405 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:33:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:33:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:33:29.829033   12405 out.go:285] * 
	* 
	W1025 08:33:29.832925   12405 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:33:29.835966   12405 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-468341 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.54s)
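
Note: every addons-disable failure in this group exits with the same MK_ADDON_DISABLE_PAUSED error. Before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json` inside the node; on this crio image the runc state directory /run/runc does not exist, so the check itself fails and the disable command aborts with exit status 11. The check can be reproduced by hand with the same commands that appear in the stderr above (a sketch, assuming the addons-468341 profile is still up):

	# List kube-system containers the way the paused check does.
	minikube -p addons-468341 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# This is the step that fails: /run/runc is absent, so runc exits non-zero.
	minikube -p addons-468341 ssh -- sudo runc list -f json
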

TestAddons/parallel/CloudSpanner (5.35s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-tkgt7" [16ff193d-4e19-41a9-9fe1-bbcd090d0a61] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004132217s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-468341 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (344.013624ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:33:26.048160   11721 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:33:26.059900   11721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:26.059958   11721 out.go:374] Setting ErrFile to fd 2...
	I1025 08:33:26.059979   11721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:26.060310   11721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:33:26.065200   11721 mustload.go:65] Loading cluster: addons-468341
	I1025 08:33:26.068591   11721 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:26.068694   11721 addons.go:606] checking whether the cluster is paused
	I1025 08:33:26.068899   11721 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:26.068937   11721 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:33:26.070625   11721 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:33:26.101102   11721 ssh_runner.go:195] Run: systemctl --version
	I1025 08:33:26.101166   11721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:33:26.124315   11721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:33:26.232701   11721 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:33:26.232787   11721 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:33:26.265207   11721 cri.go:89] found id: "5fd1e4aa2eaec9a625e731b2310d7e53df6715992458f7b34f24c564a0371c93"
	I1025 08:33:26.265231   11721 cri.go:89] found id: "5f5e2ff55f9b9d96f5647e7d06d29ba2d01ac53c5f3bebb14c41ac3b15d46dbe"
	I1025 08:33:26.265236   11721 cri.go:89] found id: "fde287f23459127cc0438aff73855d690ce348105c0984d314227ebb893c0f27"
	I1025 08:33:26.265240   11721 cri.go:89] found id: "1c4a84678a48fe97a0d5e107fffed343bbdb5797c60914bfc1787ef22ec8aab6"
	I1025 08:33:26.265248   11721 cri.go:89] found id: "3976f771a2e1f4333179d9b66178f2b8d0e5df890f3e3934fa6b45cecc30471a"
	I1025 08:33:26.265252   11721 cri.go:89] found id: "c675c035a6dbbaf595ca82e7e5494c65d80846fa4403b093199b14027e8a1667"
	I1025 08:33:26.265255   11721 cri.go:89] found id: "f4608f1e203354a663895813be5940b353d5999288ba102363d0fc2ab5bee267"
	I1025 08:33:26.265258   11721 cri.go:89] found id: "2cb17a3d4c7c6272eb0c5b9a7105721c94d5c62089aae769495e41fd49a9130b"
	I1025 08:33:26.265261   11721 cri.go:89] found id: "9dfa9f05089922f4312c61cbba94162087a4b0ccb02343ec9524bd69b39eec86"
	I1025 08:33:26.265267   11721 cri.go:89] found id: "149fe53d8f125d10a2d1e5e32c97a97f55639e392cf1a6745398ed8ac79e7d66"
	I1025 08:33:26.265270   11721 cri.go:89] found id: "c0dd415aff39e64402a3d64867549a7a3b05dfcf210f7e1f94409f91384fa997"
	I1025 08:33:26.265273   11721 cri.go:89] found id: "3f2799b9c1b41eaec314d09a1269f3e858b9982860af8c4168ed54a9280b011d"
	I1025 08:33:26.265277   11721 cri.go:89] found id: "d658e0bd52e0d32446762f4e3466c7f28b2e31c7a876c4caaddd5444b238f373"
	I1025 08:33:26.265280   11721 cri.go:89] found id: "6abccf54d473ff21bf8b7e7a67510c44ab418b13e2ded840ecd714d35bc33050"
	I1025 08:33:26.265283   11721 cri.go:89] found id: "710588f58555a870efb846b35789c67ba50b0c47c5b86a2c12d2ebe8b7d3cf14"
	I1025 08:33:26.265288   11721 cri.go:89] found id: "990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291"
	I1025 08:33:26.265318   11721 cri.go:89] found id: "3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549"
	I1025 08:33:26.265323   11721 cri.go:89] found id: "6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443"
	I1025 08:33:26.265326   11721 cri.go:89] found id: "11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2"
	I1025 08:33:26.265329   11721 cri.go:89] found id: "a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215"
	I1025 08:33:26.265334   11721 cri.go:89] found id: "f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179"
	I1025 08:33:26.265337   11721 cri.go:89] found id: "ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd"
	I1025 08:33:26.265340   11721 cri.go:89] found id: "2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c"
	I1025 08:33:26.265344   11721 cri.go:89] found id: ""
	I1025 08:33:26.265405   11721 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:33:26.282902   11721 out.go:203] 
	W1025 08:33:26.285729   11721 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:33:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:33:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:33:26.285753   11721 out.go:285] * 
	* 
	W1025 08:33:26.294550   11721 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:33:26.297390   11721 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-468341 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.35s)

TestAddons/parallel/LocalPath (8.69s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-468341 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-468341 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468341 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [e463ad75-a086-4ad9-a63a-5fe55cc188d1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [e463ad75-a086-4ad9-a63a-5fe55cc188d1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [e463ad75-a086-4ad9-a63a-5fe55cc188d1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00323866s
addons_test.go:967: (dbg) Run:  kubectl --context addons-468341 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 ssh "cat /opt/local-path-provisioner/pvc-e010f192-5941-4327-9df8-ac1fe331714f_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-468341 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-468341 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-468341 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (403.249255ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:33:26.830482   11907 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:33:26.830668   11907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:26.830680   11907 out.go:374] Setting ErrFile to fd 2...
	I1025 08:33:26.830686   11907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:26.830975   11907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:33:26.831273   11907 mustload.go:65] Loading cluster: addons-468341
	I1025 08:33:26.831690   11907 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:26.831713   11907 addons.go:606] checking whether the cluster is paused
	I1025 08:33:26.831855   11907 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:26.831874   11907 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:33:26.846976   11907 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:33:26.879506   11907 ssh_runner.go:195] Run: systemctl --version
	I1025 08:33:26.879563   11907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:33:26.901698   11907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:33:27.013931   11907 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:33:27.014184   11907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:33:27.102654   11907 cri.go:89] found id: "5fd1e4aa2eaec9a625e731b2310d7e53df6715992458f7b34f24c564a0371c93"
	I1025 08:33:27.102673   11907 cri.go:89] found id: "5f5e2ff55f9b9d96f5647e7d06d29ba2d01ac53c5f3bebb14c41ac3b15d46dbe"
	I1025 08:33:27.102678   11907 cri.go:89] found id: "fde287f23459127cc0438aff73855d690ce348105c0984d314227ebb893c0f27"
	I1025 08:33:27.102682   11907 cri.go:89] found id: "1c4a84678a48fe97a0d5e107fffed343bbdb5797c60914bfc1787ef22ec8aab6"
	I1025 08:33:27.102685   11907 cri.go:89] found id: "3976f771a2e1f4333179d9b66178f2b8d0e5df890f3e3934fa6b45cecc30471a"
	I1025 08:33:27.102689   11907 cri.go:89] found id: "c675c035a6dbbaf595ca82e7e5494c65d80846fa4403b093199b14027e8a1667"
	I1025 08:33:27.102692   11907 cri.go:89] found id: "f4608f1e203354a663895813be5940b353d5999288ba102363d0fc2ab5bee267"
	I1025 08:33:27.102695   11907 cri.go:89] found id: "2cb17a3d4c7c6272eb0c5b9a7105721c94d5c62089aae769495e41fd49a9130b"
	I1025 08:33:27.102698   11907 cri.go:89] found id: "9dfa9f05089922f4312c61cbba94162087a4b0ccb02343ec9524bd69b39eec86"
	I1025 08:33:27.102705   11907 cri.go:89] found id: "149fe53d8f125d10a2d1e5e32c97a97f55639e392cf1a6745398ed8ac79e7d66"
	I1025 08:33:27.102709   11907 cri.go:89] found id: "c0dd415aff39e64402a3d64867549a7a3b05dfcf210f7e1f94409f91384fa997"
	I1025 08:33:27.102712   11907 cri.go:89] found id: "3f2799b9c1b41eaec314d09a1269f3e858b9982860af8c4168ed54a9280b011d"
	I1025 08:33:27.102715   11907 cri.go:89] found id: "d658e0bd52e0d32446762f4e3466c7f28b2e31c7a876c4caaddd5444b238f373"
	I1025 08:33:27.102718   11907 cri.go:89] found id: "6abccf54d473ff21bf8b7e7a67510c44ab418b13e2ded840ecd714d35bc33050"
	I1025 08:33:27.102721   11907 cri.go:89] found id: "710588f58555a870efb846b35789c67ba50b0c47c5b86a2c12d2ebe8b7d3cf14"
	I1025 08:33:27.102726   11907 cri.go:89] found id: "990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291"
	I1025 08:33:27.102729   11907 cri.go:89] found id: "3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549"
	I1025 08:33:27.102733   11907 cri.go:89] found id: "6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443"
	I1025 08:33:27.102736   11907 cri.go:89] found id: "11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2"
	I1025 08:33:27.102739   11907 cri.go:89] found id: "a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215"
	I1025 08:33:27.102744   11907 cri.go:89] found id: "f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179"
	I1025 08:33:27.102747   11907 cri.go:89] found id: "ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd"
	I1025 08:33:27.102750   11907 cri.go:89] found id: "2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c"
	I1025 08:33:27.102753   11907 cri.go:89] found id: ""
	I1025 08:33:27.102800   11907 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:33:27.136458   11907 out.go:203] 
	W1025 08:33:27.140004   11907 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:33:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:33:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:33:27.140035   11907 out.go:285] * 
	* 
	W1025 08:33:27.144178   11907 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:33:27.147980   11907 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-468341 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.69s)

TestAddons/parallel/NvidiaDevicePlugin (5.28s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-w5ht9" [05248aa9-d292-4130-b10d-c632220baebb] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004133439s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-468341 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (273.416511ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:33:18.246976   11370 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:33:18.247221   11370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:18.247284   11370 out.go:374] Setting ErrFile to fd 2...
	I1025 08:33:18.247308   11370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:18.248698   11370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:33:18.249055   11370 mustload.go:65] Loading cluster: addons-468341
	I1025 08:33:18.249471   11370 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:18.249491   11370 addons.go:606] checking whether the cluster is paused
	I1025 08:33:18.249636   11370 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:18.249655   11370 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:33:18.250159   11370 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:33:18.272189   11370 ssh_runner.go:195] Run: systemctl --version
	I1025 08:33:18.272243   11370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:33:18.293943   11370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:33:18.396381   11370 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:33:18.396469   11370 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:33:18.428651   11370 cri.go:89] found id: "5fd1e4aa2eaec9a625e731b2310d7e53df6715992458f7b34f24c564a0371c93"
	I1025 08:33:18.428671   11370 cri.go:89] found id: "5f5e2ff55f9b9d96f5647e7d06d29ba2d01ac53c5f3bebb14c41ac3b15d46dbe"
	I1025 08:33:18.428676   11370 cri.go:89] found id: "fde287f23459127cc0438aff73855d690ce348105c0984d314227ebb893c0f27"
	I1025 08:33:18.428680   11370 cri.go:89] found id: "1c4a84678a48fe97a0d5e107fffed343bbdb5797c60914bfc1787ef22ec8aab6"
	I1025 08:33:18.428684   11370 cri.go:89] found id: "3976f771a2e1f4333179d9b66178f2b8d0e5df890f3e3934fa6b45cecc30471a"
	I1025 08:33:18.428687   11370 cri.go:89] found id: "c675c035a6dbbaf595ca82e7e5494c65d80846fa4403b093199b14027e8a1667"
	I1025 08:33:18.428691   11370 cri.go:89] found id: "f4608f1e203354a663895813be5940b353d5999288ba102363d0fc2ab5bee267"
	I1025 08:33:18.428694   11370 cri.go:89] found id: "2cb17a3d4c7c6272eb0c5b9a7105721c94d5c62089aae769495e41fd49a9130b"
	I1025 08:33:18.428697   11370 cri.go:89] found id: "9dfa9f05089922f4312c61cbba94162087a4b0ccb02343ec9524bd69b39eec86"
	I1025 08:33:18.428710   11370 cri.go:89] found id: "149fe53d8f125d10a2d1e5e32c97a97f55639e392cf1a6745398ed8ac79e7d66"
	I1025 08:33:18.428718   11370 cri.go:89] found id: "c0dd415aff39e64402a3d64867549a7a3b05dfcf210f7e1f94409f91384fa997"
	I1025 08:33:18.428722   11370 cri.go:89] found id: "3f2799b9c1b41eaec314d09a1269f3e858b9982860af8c4168ed54a9280b011d"
	I1025 08:33:18.428725   11370 cri.go:89] found id: "d658e0bd52e0d32446762f4e3466c7f28b2e31c7a876c4caaddd5444b238f373"
	I1025 08:33:18.428728   11370 cri.go:89] found id: "6abccf54d473ff21bf8b7e7a67510c44ab418b13e2ded840ecd714d35bc33050"
	I1025 08:33:18.428734   11370 cri.go:89] found id: "710588f58555a870efb846b35789c67ba50b0c47c5b86a2c12d2ebe8b7d3cf14"
	I1025 08:33:18.428746   11370 cri.go:89] found id: "990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291"
	I1025 08:33:18.428753   11370 cri.go:89] found id: "3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549"
	I1025 08:33:18.428758   11370 cri.go:89] found id: "6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443"
	I1025 08:33:18.428761   11370 cri.go:89] found id: "11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2"
	I1025 08:33:18.428764   11370 cri.go:89] found id: "a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215"
	I1025 08:33:18.428769   11370 cri.go:89] found id: "f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179"
	I1025 08:33:18.428772   11370 cri.go:89] found id: "ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd"
	I1025 08:33:18.428775   11370 cri.go:89] found id: "2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c"
	I1025 08:33:18.428778   11370 cri.go:89] found id: ""
	I1025 08:33:18.428828   11370 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:33:18.449222   11370 out.go:203] 
	W1025 08:33:18.453686   11370 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:33:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:33:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:33:18.453709   11370 out.go:285] * 
	* 
	W1025 08:33:18.457526   11370 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:33:18.461444   11370 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-468341 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.28s)

TestAddons/parallel/Yakd (6.27s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-6xxxr" [6afb84fa-6d1e-4784-86ea-170a6b61d9ea] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003978482s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-468341 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-468341 addons disable yakd --alsologtostderr -v=1: exit status 11 (261.335945ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 08:33:12.972692   11279 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:33:12.973074   11279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:12.973110   11279 out.go:374] Setting ErrFile to fd 2...
	I1025 08:33:12.973131   11279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:33:12.973457   11279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:33:12.973804   11279 mustload.go:65] Loading cluster: addons-468341
	I1025 08:33:12.974282   11279 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:12.974322   11279 addons.go:606] checking whether the cluster is paused
	I1025 08:33:12.974462   11279 config.go:182] Loaded profile config "addons-468341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:33:12.974493   11279 host.go:66] Checking if "addons-468341" exists ...
	I1025 08:33:12.974997   11279 cli_runner.go:164] Run: docker container inspect addons-468341 --format={{.State.Status}}
	I1025 08:33:12.992421   11279 ssh_runner.go:195] Run: systemctl --version
	I1025 08:33:12.992469   11279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-468341
	I1025 08:33:13.014882   11279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/addons-468341/id_rsa Username:docker}
	I1025 08:33:13.121939   11279 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:33:13.122042   11279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:33:13.155923   11279 cri.go:89] found id: "5fd1e4aa2eaec9a625e731b2310d7e53df6715992458f7b34f24c564a0371c93"
	I1025 08:33:13.155955   11279 cri.go:89] found id: "5f5e2ff55f9b9d96f5647e7d06d29ba2d01ac53c5f3bebb14c41ac3b15d46dbe"
	I1025 08:33:13.155960   11279 cri.go:89] found id: "fde287f23459127cc0438aff73855d690ce348105c0984d314227ebb893c0f27"
	I1025 08:33:13.155964   11279 cri.go:89] found id: "1c4a84678a48fe97a0d5e107fffed343bbdb5797c60914bfc1787ef22ec8aab6"
	I1025 08:33:13.155967   11279 cri.go:89] found id: "3976f771a2e1f4333179d9b66178f2b8d0e5df890f3e3934fa6b45cecc30471a"
	I1025 08:33:13.155971   11279 cri.go:89] found id: "c675c035a6dbbaf595ca82e7e5494c65d80846fa4403b093199b14027e8a1667"
	I1025 08:33:13.155975   11279 cri.go:89] found id: "f4608f1e203354a663895813be5940b353d5999288ba102363d0fc2ab5bee267"
	I1025 08:33:13.155978   11279 cri.go:89] found id: "2cb17a3d4c7c6272eb0c5b9a7105721c94d5c62089aae769495e41fd49a9130b"
	I1025 08:33:13.155981   11279 cri.go:89] found id: "9dfa9f05089922f4312c61cbba94162087a4b0ccb02343ec9524bd69b39eec86"
	I1025 08:33:13.155990   11279 cri.go:89] found id: "149fe53d8f125d10a2d1e5e32c97a97f55639e392cf1a6745398ed8ac79e7d66"
	I1025 08:33:13.155994   11279 cri.go:89] found id: "c0dd415aff39e64402a3d64867549a7a3b05dfcf210f7e1f94409f91384fa997"
	I1025 08:33:13.155997   11279 cri.go:89] found id: "3f2799b9c1b41eaec314d09a1269f3e858b9982860af8c4168ed54a9280b011d"
	I1025 08:33:13.156000   11279 cri.go:89] found id: "d658e0bd52e0d32446762f4e3466c7f28b2e31c7a876c4caaddd5444b238f373"
	I1025 08:33:13.156005   11279 cri.go:89] found id: "6abccf54d473ff21bf8b7e7a67510c44ab418b13e2ded840ecd714d35bc33050"
	I1025 08:33:13.156008   11279 cri.go:89] found id: "710588f58555a870efb846b35789c67ba50b0c47c5b86a2c12d2ebe8b7d3cf14"
	I1025 08:33:13.156017   11279 cri.go:89] found id: "990bc617d7987b2c3713ad5fcb4652f9f7697f9f8286262c1d0fbd61f03cb291"
	I1025 08:33:13.156024   11279 cri.go:89] found id: "3efcb5f51b3c46b7526ad1ba08ebb8af07f5ac32cc61ce8aa211066d05af5549"
	I1025 08:33:13.156030   11279 cri.go:89] found id: "6375e783d80c1ab19c196dea15de187c31ecf6231a480e3b354abf63f9986443"
	I1025 08:33:13.156034   11279 cri.go:89] found id: "11da55b7006a0c7ee4fdfa87ea50ea79c464ed85828bc75ef11dbecc5857aaa2"
	I1025 08:33:13.156036   11279 cri.go:89] found id: "a8a4b543d2547f083b802e2c98710bcc2f07776dfc49c8c41bb357abf197d215"
	I1025 08:33:13.156041   11279 cri.go:89] found id: "f40b4040bdb0ca8f6ba539c3a1cc4570ad8fc2d470fe2a3ee7023680adca8179"
	I1025 08:33:13.156050   11279 cri.go:89] found id: "ca4b0c8b5bb6a41514380e8fbc131cfd3c0036a7467eef4ccf7772196d699ebd"
	I1025 08:33:13.156054   11279 cri.go:89] found id: "2105d8a4af178f4195391e5b2d1ca656712cbf36e77394a9382e7c7a4b280b3c"
	I1025 08:33:13.156057   11279 cri.go:89] found id: ""
	I1025 08:33:13.156117   11279 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 08:33:13.172558   11279 out.go:203] 
	W1025 08:33:13.175383   11279 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:33:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:33:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 08:33:13.175408   11279 out.go:285] * 
	* 
	W1025 08:33:13.179215   11279 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 08:33:13.181926   11279 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-468341 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)
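The Yakd failure, like the other addon-disable failures in this run, never reaches the addon itself: minikube's paused-cluster check shells out to sudo runc list -f json, and on this cri-o node /run/runc does not exist, so the probe fails and the command exits with MK_ADDON_DISABLE_PAUSED before any addon logic runs. A minimal sketch of reproducing the probe by hand, assuming the addons-468341 profile is still running (commands are illustrative, not captured from this run):

	$ out/minikube-linux-arm64 -p addons-468341 ssh -- sudo runc list -f json   # expected to fail here: open /run/runc: no such file or directory
	$ out/minikube-linux-arm64 -p addons-468341 ssh -- sudo crictl ps           # CRI-level container listing, which does work under cri-o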

TestFunctional/parallel/ServiceCmdConnect (603.87s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-562171 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-562171 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-n7k4v" [7661065f-1663-4fd9-a12b-1487fd093564] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562171 -n functional-562171
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-25 08:50:30.62505848 +0000 UTC m=+1247.708904891
functional_test.go:1645: (dbg) Run:  kubectl --context functional-562171 describe po hello-node-connect-7d85dfc575-n7k4v -n default
functional_test.go:1645: (dbg) kubectl --context functional-562171 describe po hello-node-connect-7d85dfc575-n7k4v -n default:
Name:             hello-node-connect-7d85dfc575-n7k4v
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-562171/192.168.49.2
Start Time:       Sat, 25 Oct 2025 08:40:30 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rsdq2 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-rsdq2:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-n7k4v to functional-562171
Normal   Pulling    7m6s (x5 over 9m58s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 9m58s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m53s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m53s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
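The events above point at the actual root cause: the deployment references the short image name kicbase/echo-server, and cri-o's short-name mode is enforcing, so the unqualified name resolves to an ambiguous candidate list and every pull attempt fails. A sketch of the workaround, assuming the image is intended to come from Docker Hub (illustrative only, not part of the test run):

	$ kubectl --context functional-562171 create deployment hello-node-connect \
	    --image docker.io/kicbase/echo-server:latest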
functional_test.go:1645: (dbg) Run:  kubectl --context functional-562171 logs hello-node-connect-7d85dfc575-n7k4v -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-562171 logs hello-node-connect-7d85dfc575-n7k4v -n default: exit status 1 (109.961081ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-n7k4v" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-562171 logs hello-node-connect-7d85dfc575-n7k4v -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-562171 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-n7k4v
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-562171/192.168.49.2
Start Time:       Sat, 25 Oct 2025 08:40:30 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rsdq2 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-rsdq2:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-n7k4v to functional-562171
Normal   Pulling    7m6s (x5 over 9m58s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 9m58s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m53s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m53s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-562171 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-562171 logs -l app=hello-node-connect: exit status 1 (193.999312ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-n7k4v" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-562171 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-562171 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.105.92.53
IPs:                      10.105.92.53
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31021/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
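Note the empty Endpoints field in the service description: the selector matches the pod, but a pod that never reports Ready is never published as an endpoint, so the NodePort had nothing to route to. A quick cross-check sketch (illustrative; with no ready pods it would be expected to print <none>):

	$ kubectl --context functional-562171 get endpoints hello-node-connect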
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-562171
helpers_test.go:243: (dbg) docker inspect functional-562171:

-- stdout --
	[
	    {
	        "Id": "d028b826e4913926bd5f25c1d3ac980f882761526d16f148847b42c95d66f5fd",
	        "Created": "2025-10-25T08:37:22.931744386Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 19976,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T08:37:22.993169485Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d028b826e4913926bd5f25c1d3ac980f882761526d16f148847b42c95d66f5fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d028b826e4913926bd5f25c1d3ac980f882761526d16f148847b42c95d66f5fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/d028b826e4913926bd5f25c1d3ac980f882761526d16f148847b42c95d66f5fd/hosts",
	        "LogPath": "/var/lib/docker/containers/d028b826e4913926bd5f25c1d3ac980f882761526d16f148847b42c95d66f5fd/d028b826e4913926bd5f25c1d3ac980f882761526d16f148847b42c95d66f5fd-json.log",
	        "Name": "/functional-562171",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-562171:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-562171",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d028b826e4913926bd5f25c1d3ac980f882761526d16f148847b42c95d66f5fd",
	                "LowerDir": "/var/lib/docker/overlay2/8f2a13ca20614dca87dbdbe3f2b7fc892e6f3346de6bb3da9412e8bc568bc4de-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8f2a13ca20614dca87dbdbe3f2b7fc892e6f3346de6bb3da9412e8bc568bc4de/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8f2a13ca20614dca87dbdbe3f2b7fc892e6f3346de6bb3da9412e8bc568bc4de/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8f2a13ca20614dca87dbdbe3f2b7fc892e6f3346de6bb3da9412e8bc568bc4de/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-562171",
	                "Source": "/var/lib/docker/volumes/functional-562171/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-562171",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-562171",
	                "name.minikube.sigs.k8s.io": "functional-562171",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cba00f94e6430498e6af064e7989a4f58547532b49a2afd042a578c2765ecfd0",
	            "SandboxKey": "/var/run/docker/netns/cba00f94e643",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-562171": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:f4:2f:45:7e:80",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "412efb2695ba82727e501fdc4bd02c7f514c4bf8c12db75b6fd289193c332ee6",
	                    "EndpointID": "f0a8d4ac28da0cbaa037c3694dfba3d9c779a48133a081d4eded00d54f661201",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-562171",
	                        "d028b826e491"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-562171 -n functional-562171
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-562171 logs -n 25: (1.503447428s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-562171 ssh -n functional-562171 sudo cat /tmp/does/not/exist/cp-test.txt                                                                       │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ ssh     │ functional-562171 ssh sudo cat /etc/ssl/certs/4110.pem                                                                                                    │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ image   │ functional-562171 image load --daemon kicbase/echo-server:functional-562171 --alsologtostderr                                                             │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ ssh     │ functional-562171 ssh sudo cat /usr/share/ca-certificates/4110.pem                                                                                        │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ ssh     │ functional-562171 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ image   │ functional-562171 image ls                                                                                                                                │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ ssh     │ functional-562171 ssh sudo cat /etc/ssl/certs/41102.pem                                                                                                   │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ image   │ functional-562171 image load --daemon kicbase/echo-server:functional-562171 --alsologtostderr                                                             │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ ssh     │ functional-562171 ssh sudo cat /usr/share/ca-certificates/41102.pem                                                                                       │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ ssh     │ functional-562171 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ image   │ functional-562171 image ls                                                                                                                                │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ ssh     │ functional-562171 ssh sudo cat /etc/test/nested/copy/4110/hosts                                                                                           │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ image   │ functional-562171 image load --daemon kicbase/echo-server:functional-562171 --alsologtostderr                                                             │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ image   │ functional-562171 image ls                                                                                                                                │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ image   │ functional-562171 image save kicbase/echo-server:functional-562171 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ image   │ functional-562171 image rm kicbase/echo-server:functional-562171 --alsologtostderr                                                                        │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ ssh     │ functional-562171 ssh echo hello                                                                                                                          │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ image   │ functional-562171 image ls                                                                                                                                │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ ssh     │ functional-562171 ssh cat /etc/hostname                                                                                                                   │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ image   │ functional-562171 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ tunnel  │ functional-562171 tunnel --alsologtostderr                                                                                                                │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │                     │
	│ image   │ functional-562171 image save --daemon kicbase/echo-server:functional-562171 --alsologtostderr                                                             │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ tunnel  │ functional-562171 tunnel --alsologtostderr                                                                                                                │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │                     │
	│ addons  │ functional-562171 addons list                                                                                                                             │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	│ addons  │ functional-562171 addons list -o json                                                                                                                     │ functional-562171 │ jenkins │ v1.37.0 │ 25 Oct 25 08:40 UTC │ 25 Oct 25 08:40 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:39:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:39:12.587949   24139 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:39:12.589845   24139 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:39:12.589852   24139 out.go:374] Setting ErrFile to fd 2...
	I1025 08:39:12.589856   24139 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:39:12.590381   24139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:39:12.590853   24139 out.go:368] Setting JSON to false
	I1025 08:39:12.592129   24139 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1304,"bootTime":1761380249,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 08:39:12.592228   24139 start.go:141] virtualization:  
	I1025 08:39:12.598644   24139 out.go:179] * [functional-562171] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 08:39:12.601534   24139 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 08:39:12.601603   24139 notify.go:220] Checking for updates...
	I1025 08:39:12.607243   24139 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:39:12.610115   24139 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 08:39:12.612972   24139 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 08:39:12.615822   24139 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 08:39:12.618684   24139 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 08:39:12.622198   24139 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:39:12.622293   24139 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:39:12.645386   24139 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 08:39:12.645494   24139 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:39:12.709268   24139 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-25 08:39:12.700211888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 08:39:12.709365   24139 docker.go:318] overlay module found
	I1025 08:39:12.712379   24139 out.go:179] * Using the docker driver based on existing profile
	I1025 08:39:12.720065   24139 start.go:305] selected driver: docker
	I1025 08:39:12.720075   24139 start.go:925] validating driver "docker" against &{Name:functional-562171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-562171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:39:12.720199   24139 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 08:39:12.720303   24139 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:39:12.778885   24139 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-25 08:39:12.769210219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 08:39:12.779450   24139 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 08:39:12.779475   24139 cni.go:84] Creating CNI manager for ""
	I1025 08:39:12.779536   24139 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:39:12.779589   24139 start.go:349] cluster config:
	{Name:functional-562171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-562171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:39:12.784623   24139 out.go:179] * Starting "functional-562171" primary control-plane node in "functional-562171" cluster
	I1025 08:39:12.787518   24139 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 08:39:12.790474   24139 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 08:39:12.793323   24139 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:39:12.793371   24139 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 08:39:12.793379   24139 cache.go:58] Caching tarball of preloaded images
	I1025 08:39:12.793469   24139 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 08:39:12.793477   24139 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 08:39:12.793593   24139 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/config.json ...
	I1025 08:39:12.793800   24139 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 08:39:12.815537   24139 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 08:39:12.815549   24139 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 08:39:12.815568   24139 cache.go:232] Successfully downloaded all kic artifacts
	I1025 08:39:12.815591   24139 start.go:360] acquireMachinesLock for functional-562171: {Name:mke18e7319fcd18bb15b49ffe0dda1f05d6df207 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 08:39:12.815655   24139 start.go:364] duration metric: took 47.139µs to acquireMachinesLock for "functional-562171"
	I1025 08:39:12.815674   24139 start.go:96] Skipping create...Using existing machine configuration
	I1025 08:39:12.815679   24139 fix.go:54] fixHost starting: 
	I1025 08:39:12.815958   24139 cli_runner.go:164] Run: docker container inspect functional-562171 --format={{.State.Status}}
	I1025 08:39:12.833186   24139 fix.go:112] recreateIfNeeded on functional-562171: state=Running err=<nil>
	W1025 08:39:12.833204   24139 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 08:39:12.836624   24139 out.go:252] * Updating the running docker "functional-562171" container ...
	I1025 08:39:12.836651   24139 machine.go:93] provisionDockerMachine start ...
	I1025 08:39:12.836729   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
	I1025 08:39:12.859102   24139 main.go:141] libmachine: Using SSH client type: native
	I1025 08:39:12.859409   24139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1025 08:39:12.859416   24139 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 08:39:13.014043   24139 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-562171
	
	I1025 08:39:13.014064   24139 ubuntu.go:182] provisioning hostname "functional-562171"
	I1025 08:39:13.014134   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
	I1025 08:39:13.034204   24139 main.go:141] libmachine: Using SSH client type: native
	I1025 08:39:13.034505   24139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1025 08:39:13.034518   24139 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-562171 && echo "functional-562171" | sudo tee /etc/hostname
	I1025 08:39:13.199872   24139 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-562171
	
	I1025 08:39:13.199952   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
	I1025 08:39:13.218548   24139 main.go:141] libmachine: Using SSH client type: native
	I1025 08:39:13.218863   24139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1025 08:39:13.218876   24139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-562171' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-562171/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-562171' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 08:39:13.370257   24139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 08:39:13.370274   24139 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 08:39:13.370291   24139 ubuntu.go:190] setting up certificates
	I1025 08:39:13.370298   24139 provision.go:84] configureAuth start
	I1025 08:39:13.370359   24139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562171
	I1025 08:39:13.389060   24139 provision.go:143] copyHostCerts
	I1025 08:39:13.389120   24139 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 08:39:13.389137   24139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 08:39:13.389216   24139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 08:39:13.389321   24139 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 08:39:13.389324   24139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 08:39:13.389350   24139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 08:39:13.389409   24139 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 08:39:13.389412   24139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 08:39:13.389433   24139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 08:39:13.389487   24139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.functional-562171 san=[127.0.0.1 192.168.49.2 functional-562171 localhost minikube]
	I1025 08:39:13.914973   24139 provision.go:177] copyRemoteCerts
	I1025 08:39:13.915024   24139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 08:39:13.915068   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
	I1025 08:39:13.932997   24139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/functional-562171/id_rsa Username:docker}
	I1025 08:39:14.038420   24139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 08:39:14.058537   24139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 08:39:14.079405   24139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 08:39:14.097225   24139 provision.go:87] duration metric: took 726.904167ms to configureAuth
	I1025 08:39:14.097241   24139 ubuntu.go:206] setting minikube options for container-runtime
	I1025 08:39:14.097437   24139 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:39:14.097552   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
	I1025 08:39:14.117156   24139 main.go:141] libmachine: Using SSH client type: native
	I1025 08:39:14.117477   24139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1025 08:39:14.117491   24139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 08:39:19.492517   24139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 08:39:19.492529   24139 machine.go:96] duration metric: took 6.655871177s to provisionDockerMachine
	I1025 08:39:19.492538   24139 start.go:293] postStartSetup for "functional-562171" (driver="docker")
	I1025 08:39:19.492548   24139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 08:39:19.492630   24139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 08:39:19.492672   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
	I1025 08:39:19.511021   24139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/functional-562171/id_rsa Username:docker}
	I1025 08:39:19.617911   24139 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 08:39:19.621169   24139 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 08:39:19.621189   24139 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 08:39:19.621198   24139 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 08:39:19.621252   24139 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 08:39:19.621328   24139 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 08:39:19.621403   24139 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/test/nested/copy/4110/hosts -> hosts in /etc/test/nested/copy/4110
	I1025 08:39:19.621443   24139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4110
	I1025 08:39:19.629089   24139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 08:39:19.647076   24139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/test/nested/copy/4110/hosts --> /etc/test/nested/copy/4110/hosts (40 bytes)
	I1025 08:39:19.665215   24139 start.go:296] duration metric: took 172.663854ms for postStartSetup
	I1025 08:39:19.665300   24139 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 08:39:19.665338   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
	I1025 08:39:19.684665   24139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/functional-562171/id_rsa Username:docker}
	I1025 08:39:19.787584   24139 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 08:39:19.792934   24139 fix.go:56] duration metric: took 6.977247588s for fixHost
	I1025 08:39:19.792948   24139 start.go:83] releasing machines lock for "functional-562171", held for 6.977286374s
	I1025 08:39:19.793014   24139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562171
	I1025 08:39:19.809253   24139 ssh_runner.go:195] Run: cat /version.json
	I1025 08:39:19.809308   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
	I1025 08:39:19.809577   24139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 08:39:19.809624   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
	I1025 08:39:19.832018   24139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/functional-562171/id_rsa Username:docker}
	I1025 08:39:19.839635   24139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/functional-562171/id_rsa Username:docker}
	I1025 08:39:19.934045   24139 ssh_runner.go:195] Run: systemctl --version
	I1025 08:39:20.029330   24139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 08:39:20.068995   24139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 08:39:20.074507   24139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 08:39:20.074573   24139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 08:39:20.083188   24139 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 08:39:20.083204   24139 start.go:495] detecting cgroup driver to use...
	I1025 08:39:20.083239   24139 detect.go:187] detected "cgroupfs" cgroup driver on host os
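detect.go reported "cgroupfs" for this host. Minikube's real probe is more involved; what follows is only a rough approximation of the common heuristic (systemd init plus cgroup v2 mounted suggests the systemd driver, anything else falls back to cgroupfs):

	package main

	import (
		"fmt"
		"os"
	)

	// detectCgroupDriver is a hedged stand-in for the probe logged above,
	// not minikube's actual detect.go logic.
	func detectCgroupDriver() string {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			if _, err := os.Stat("/run/systemd/system"); err == nil {
				return "systemd"
			}
		}
		return "cgroupfs"
	}

	func main() { fmt.Println(detectCgroupDriver()) }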
	I1025 08:39:20.083288   24139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 08:39:20.100421   24139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 08:39:20.114912   24139 docker.go:218] disabling cri-docker service (if available) ...
	I1025 08:39:20.114969   24139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 08:39:20.131980   24139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 08:39:20.145517   24139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 08:39:20.287549   24139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 08:39:20.427321   24139 docker.go:234] disabling docker service ...
	I1025 08:39:20.427398   24139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 08:39:20.442704   24139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 08:39:20.456471   24139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 08:39:20.601966   24139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 08:39:20.748380   24139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 08:39:20.762277   24139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 08:39:20.776755   24139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 08:39:20.776810   24139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:39:20.785696   24139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 08:39:20.785761   24139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:39:20.794968   24139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:39:20.803996   24139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:39:20.812783   24139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 08:39:20.820747   24139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:39:20.829562   24139 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:39:20.838014   24139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 08:39:20.847057   24139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 08:39:20.855128   24139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
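Each sed invocation above is a line-anchored rewrite of /etc/crio/crio.conf.d/02-crio.conf. A sketch of the first one (the pause_image override) as a Go regexp; behaviorally equivalent, but not minikube's actual code:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		data = re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		if err := os.WriteFile(conf, data, 0o644); err != nil {
			panic(err)
		}
	}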
	I1025 08:39:20.862501   24139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:39:21.008401   24139 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 08:39:27.062485   24139 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.054061474s)
	I1025 08:39:27.062501   24139 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 08:39:27.062550   24139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 08:39:27.066323   24139 start.go:563] Will wait 60s for crictl version
	I1025 08:39:27.066388   24139 ssh_runner.go:195] Run: which crictl
	I1025 08:39:27.069849   24139 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 08:39:27.098393   24139 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 08:39:27.098464   24139 ssh_runner.go:195] Run: crio --version
	I1025 08:39:27.125330   24139 ssh_runner.go:195] Run: crio --version
	I1025 08:39:27.162850   24139 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 08:39:27.165849   24139 cli_runner.go:164] Run: docker network inspect functional-562171 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 08:39:27.182215   24139 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 08:39:27.189302   24139 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1025 08:39:27.192247   24139 kubeadm.go:883] updating cluster {Name:functional-562171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-562171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 08:39:27.192367   24139 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 08:39:27.192437   24139 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:39:27.226719   24139 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 08:39:27.226730   24139 crio.go:433] Images already preloaded, skipping extraction
	I1025 08:39:27.226781   24139 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 08:39:27.256381   24139 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 08:39:27.256392   24139 cache_images.go:85] Images are preloaded, skipping loading
	I1025 08:39:27.256398   24139 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1025 08:39:27.256487   24139 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-562171 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-562171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 08:39:27.256563   24139 ssh_runner.go:195] Run: crio config
	I1025 08:39:27.328166   24139 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1025 08:39:27.328185   24139 cni.go:84] Creating CNI manager for ""
	I1025 08:39:27.328204   24139 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:39:27.328216   24139 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 08:39:27.328246   24139 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-562171 NodeName:functional-562171 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 08:39:27.328388   24139 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-562171"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
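The kubeadm.yaml above is rendered from the kubeadm options struct logged at kubeadm.go:190. Purely as an illustration of that rendering step, here is a toy version of the ClusterConfiguration networking stanza with text/template; the template text is abridged and is not minikube's real template:

	package main

	import (
		"os"
		"text/template"
	)

	const stub = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	controlPlaneEndpoint: {{.Endpoint}}:{{.Port}}
	networking:
	  dnsDomain: {{.DNSDomain}}
	  podSubnet: "{{.PodCIDR}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(stub))
		if err := t.Execute(os.Stdout, map[string]string{
			"Endpoint":    "control-plane.minikube.internal",
			"Port":        "8441",
			"DNSDomain":   "cluster.local",
			"PodCIDR":     "10.244.0.0/16",
			"ServiceCIDR": "10.96.0.0/12",
		}); err != nil {
			panic(err)
		}
	}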
	I1025 08:39:27.328463   24139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 08:39:27.336419   24139 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 08:39:27.336484   24139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 08:39:27.344091   24139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 08:39:27.356661   24139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 08:39:27.369472   24139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1025 08:39:27.382487   24139 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 08:39:27.387524   24139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:39:27.530549   24139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 08:39:27.543821   24139 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171 for IP: 192.168.49.2
	I1025 08:39:27.543832   24139 certs.go:195] generating shared ca certs ...
	I1025 08:39:27.543846   24139 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:39:27.543973   24139 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 08:39:27.544013   24139 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 08:39:27.544019   24139 certs.go:257] generating profile certs ...
	I1025 08:39:27.544112   24139 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.key
	I1025 08:39:27.544158   24139 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/apiserver.key.9495e95b
	I1025 08:39:27.544190   24139 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/proxy-client.key
	I1025 08:39:27.544295   24139 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 08:39:27.544322   24139 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 08:39:27.544328   24139 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 08:39:27.544354   24139 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 08:39:27.544380   24139 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 08:39:27.544399   24139 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 08:39:27.544442   24139 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 08:39:27.545007   24139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 08:39:27.562824   24139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 08:39:27.580435   24139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 08:39:27.599252   24139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 08:39:27.616017   24139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 08:39:27.633116   24139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 08:39:27.650817   24139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 08:39:27.668193   24139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 08:39:27.686289   24139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 08:39:27.704235   24139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 08:39:27.721943   24139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 08:39:27.739542   24139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 08:39:27.752738   24139 ssh_runner.go:195] Run: openssl version
	I1025 08:39:27.759041   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 08:39:27.767753   24139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:39:27.771708   24139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:39:27.771762   24139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 08:39:27.817879   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 08:39:27.826092   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 08:39:27.834614   24139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 08:39:27.838453   24139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 08:39:27.838508   24139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 08:39:27.879838   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
	I1025 08:39:27.889835   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 08:39:27.902487   24139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 08:39:27.908883   24139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 08:39:27.908937   24139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 08:39:27.983115   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
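The three openssl -hash / ln -fs pairs above do by hand what c_rehash does: OpenSSL-based TLS stacks look up CAs by subject-hash filename under /etc/ssl/certs. A sketch of one iteration (the linkBySubjectHash helper name is illustrative):

	package main

	import (
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash computes the OpenSSL subject hash of a certificate
	// and symlinks /etc/ssl/certs/<hash>.0 to it, as the log above does.
	func linkBySubjectHash(cert string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		os.Remove(link) // emulate ln -f
		return os.Symlink(cert, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			panic(err)
		}
	}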
	I1025 08:39:27.996436   24139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 08:39:28.002972   24139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 08:39:28.085616   24139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 08:39:28.179880   24139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 08:39:28.297001   24139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 08:39:28.401441   24139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 08:39:28.474273   24139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
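Each openssl x509 -checkend 86400 call above asks whether a certificate expires within the next 24 hours. The same test in pure Go with crypto/x509, using one of the cert paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Same question as: openssl x509 -checkend 86400
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h; regeneration needed")
		}
	}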
	I1025 08:39:28.518926   24139 kubeadm.go:400] StartCluster: {Name:functional-562171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-562171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:39:28.519021   24139 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 08:39:28.519099   24139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:39:28.556205   24139 cri.go:89] found id: "050f64f8b63e45822268cb6c34d9030c6a57b8ef7c0575bdec2f59e7e67b2089"
	I1025 08:39:28.556230   24139 cri.go:89] found id: "29b1f7b69d96d15dc35468a5b1234ae5d5c58af47abc4bac75a94d1bf736541f"
	I1025 08:39:28.556233   24139 cri.go:89] found id: "7ee461c0e642fc2844ed7d9012b0e1b7420ad5dedac7c2f4741532ca1a47f526"
	I1025 08:39:28.556236   24139 cri.go:89] found id: "fc8395732c0b9d9844b235f58eaa64199ecc4a2397cac477f7e2a17dcaf7405e"
	I1025 08:39:28.556238   24139 cri.go:89] found id: "07d8948e5379760ad6b0d1be7d0126a868b0184233e347ff32ca19e843072b2c"
	I1025 08:39:28.556241   24139 cri.go:89] found id: "5ee56eaa492c0992401aecc13f4ca65df558341eacee786a686667cdd4d925a7"
	I1025 08:39:28.556243   24139 cri.go:89] found id: "0693f9b169438ba5a5fc2178196152ca96ec4f2b0c22f78de417761a234fdcca"
	I1025 08:39:28.556246   24139 cri.go:89] found id: "dcae10c77abf7bbc8fbe13bf2dfcfdf59784820eda372386387c87bf668ca085"
	I1025 08:39:28.556248   24139 cri.go:89] found id: "9d00ce31e16995a25525dfc36083d3e7b8e1b041b011fad55cdcf0b88c29d644"
	I1025 08:39:28.556254   24139 cri.go:89] found id: "aa2e3d3790fa844a28349e633de63b5e055ba7ab8b75fa74ea0b0742e47fff64"
	I1025 08:39:28.556257   24139 cri.go:89] found id: "bf297aed6d55454eb2b593d8578d5231b5edcacedf2a091c859548581c0fdd69"
	I1025 08:39:28.556259   24139 cri.go:89] found id: "3feac1d616e8876f368eb124cd64e85bab44349dded3295535f3fde701f39ba3"
	I1025 08:39:28.556261   24139 cri.go:89] found id: "eba191b29be77545351ccde75977cd81fd6282e8f8d9d7b261637ebf52aec07a"
	I1025 08:39:28.556263   24139 cri.go:89] found id: "248da657a4403d4a40dd6c31c0a1a3ff7d06a5513936de910640cca61d946401"
	I1025 08:39:28.556265   24139 cri.go:89] found id: ""
	I1025 08:39:28.556325   24139 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 08:39:28.568189   24139 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:39:28Z" level=error msg="open /run/runc: no such file or directory"
	I1025 08:39:28.568274   24139 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 08:39:28.577067   24139 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 08:39:28.577076   24139 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 08:39:28.577139   24139 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 08:39:28.584501   24139 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 08:39:28.585075   24139 kubeconfig.go:125] found "functional-562171" server: "https://192.168.49.2:8441"
	I1025 08:39:28.586877   24139 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 08:39:28.594996   24139 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-25 08:37:29.931969456 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-25 08:39:27.377273593 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
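Drift detection above reduces to diff -u between the deployed and freshly rendered kubeadm.yaml, where diff's exit status 1 ("files differ") is what triggers the reconfigure. A sketch:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("no config drift")
		case errors.As(err, &ee) && ee.ExitCode() == 1:
			// diff exit status 1 means the files differ: reconfigure the
			// cluster from the new kubeadm.yaml, as the log above decides.
			fmt.Printf("config drift detected:\n%s", out)
		default:
			panic(err)
		}
	}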
	I1025 08:39:28.595005   24139 kubeadm.go:1160] stopping kube-system containers ...
	I1025 08:39:28.595024   24139 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1025 08:39:28.595077   24139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 08:39:28.655451   24139 cri.go:89] found id: "050f64f8b63e45822268cb6c34d9030c6a57b8ef7c0575bdec2f59e7e67b2089"
	I1025 08:39:28.655462   24139 cri.go:89] found id: "29b1f7b69d96d15dc35468a5b1234ae5d5c58af47abc4bac75a94d1bf736541f"
	I1025 08:39:28.655465   24139 cri.go:89] found id: "7ee461c0e642fc2844ed7d9012b0e1b7420ad5dedac7c2f4741532ca1a47f526"
	I1025 08:39:28.655468   24139 cri.go:89] found id: "fc8395732c0b9d9844b235f58eaa64199ecc4a2397cac477f7e2a17dcaf7405e"
	I1025 08:39:28.655470   24139 cri.go:89] found id: "07d8948e5379760ad6b0d1be7d0126a868b0184233e347ff32ca19e843072b2c"
	I1025 08:39:28.655474   24139 cri.go:89] found id: "5ee56eaa492c0992401aecc13f4ca65df558341eacee786a686667cdd4d925a7"
	I1025 08:39:28.655476   24139 cri.go:89] found id: "0693f9b169438ba5a5fc2178196152ca96ec4f2b0c22f78de417761a234fdcca"
	I1025 08:39:28.655481   24139 cri.go:89] found id: "dcae10c77abf7bbc8fbe13bf2dfcfdf59784820eda372386387c87bf668ca085"
	I1025 08:39:28.655490   24139 cri.go:89] found id: "9d00ce31e16995a25525dfc36083d3e7b8e1b041b011fad55cdcf0b88c29d644"
	I1025 08:39:28.655496   24139 cri.go:89] found id: "aa2e3d3790fa844a28349e633de63b5e055ba7ab8b75fa74ea0b0742e47fff64"
	I1025 08:39:28.655498   24139 cri.go:89] found id: "bf297aed6d55454eb2b593d8578d5231b5edcacedf2a091c859548581c0fdd69"
	I1025 08:39:28.655500   24139 cri.go:89] found id: "3feac1d616e8876f368eb124cd64e85bab44349dded3295535f3fde701f39ba3"
	I1025 08:39:28.655502   24139 cri.go:89] found id: "eba191b29be77545351ccde75977cd81fd6282e8f8d9d7b261637ebf52aec07a"
	I1025 08:39:28.655505   24139 cri.go:89] found id: "248da657a4403d4a40dd6c31c0a1a3ff7d06a5513936de910640cca61d946401"
	I1025 08:39:28.655507   24139 cri.go:89] found id: ""
	I1025 08:39:28.655519   24139 cri.go:252] Stopping containers: [050f64f8b63e45822268cb6c34d9030c6a57b8ef7c0575bdec2f59e7e67b2089 29b1f7b69d96d15dc35468a5b1234ae5d5c58af47abc4bac75a94d1bf736541f 7ee461c0e642fc2844ed7d9012b0e1b7420ad5dedac7c2f4741532ca1a47f526 fc8395732c0b9d9844b235f58eaa64199ecc4a2397cac477f7e2a17dcaf7405e 07d8948e5379760ad6b0d1be7d0126a868b0184233e347ff32ca19e843072b2c 5ee56eaa492c0992401aecc13f4ca65df558341eacee786a686667cdd4d925a7 0693f9b169438ba5a5fc2178196152ca96ec4f2b0c22f78de417761a234fdcca dcae10c77abf7bbc8fbe13bf2dfcfdf59784820eda372386387c87bf668ca085 9d00ce31e16995a25525dfc36083d3e7b8e1b041b011fad55cdcf0b88c29d644 aa2e3d3790fa844a28349e633de63b5e055ba7ab8b75fa74ea0b0742e47fff64 bf297aed6d55454eb2b593d8578d5231b5edcacedf2a091c859548581c0fdd69 3feac1d616e8876f368eb124cd64e85bab44349dded3295535f3fde701f39ba3 eba191b29be77545351ccde75977cd81fd6282e8f8d9d7b261637ebf52aec07a 248da657a4403d4a40dd6c31c0a1a3ff7d06a5513936de910640cca61d946401]
	I1025 08:39:28.655579   24139 ssh_runner.go:195] Run: which crictl
	I1025 08:39:28.662731   24139 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 050f64f8b63e45822268cb6c34d9030c6a57b8ef7c0575bdec2f59e7e67b2089 29b1f7b69d96d15dc35468a5b1234ae5d5c58af47abc4bac75a94d1bf736541f 7ee461c0e642fc2844ed7d9012b0e1b7420ad5dedac7c2f4741532ca1a47f526 fc8395732c0b9d9844b235f58eaa64199ecc4a2397cac477f7e2a17dcaf7405e 07d8948e5379760ad6b0d1be7d0126a868b0184233e347ff32ca19e843072b2c 5ee56eaa492c0992401aecc13f4ca65df558341eacee786a686667cdd4d925a7 0693f9b169438ba5a5fc2178196152ca96ec4f2b0c22f78de417761a234fdcca dcae10c77abf7bbc8fbe13bf2dfcfdf59784820eda372386387c87bf668ca085 9d00ce31e16995a25525dfc36083d3e7b8e1b041b011fad55cdcf0b88c29d644 aa2e3d3790fa844a28349e633de63b5e055ba7ab8b75fa74ea0b0742e47fff64 bf297aed6d55454eb2b593d8578d5231b5edcacedf2a091c859548581c0fdd69 3feac1d616e8876f368eb124cd64e85bab44349dded3295535f3fde701f39ba3 eba191b29be77545351ccde75977cd81fd6282e8f8d9d7b261637ebf52aec07a 248da657a4403d4a40dd6c31c0a1a3ff7d06a5513936de910640cca61d946401
	I1025 08:39:44.521307   24139 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 050f64f8b63e45822268cb6c34d9030c6a57b8ef7c0575bdec2f59e7e67b2089 29b1f7b69d96d15dc35468a5b1234ae5d5c58af47abc4bac75a94d1bf736541f 7ee461c0e642fc2844ed7d9012b0e1b7420ad5dedac7c2f4741532ca1a47f526 fc8395732c0b9d9844b235f58eaa64199ecc4a2397cac477f7e2a17dcaf7405e 07d8948e5379760ad6b0d1be7d0126a868b0184233e347ff32ca19e843072b2c 5ee56eaa492c0992401aecc13f4ca65df558341eacee786a686667cdd4d925a7 0693f9b169438ba5a5fc2178196152ca96ec4f2b0c22f78de417761a234fdcca dcae10c77abf7bbc8fbe13bf2dfcfdf59784820eda372386387c87bf668ca085 9d00ce31e16995a25525dfc36083d3e7b8e1b041b011fad55cdcf0b88c29d644 aa2e3d3790fa844a28349e633de63b5e055ba7ab8b75fa74ea0b0742e47fff64 bf297aed6d55454eb2b593d8578d5231b5edcacedf2a091c859548581c0fdd69 3feac1d616e8876f368eb124cd64e85bab44349dded3295535f3fde701f39ba3 eba191b29be77545351ccde75977cd81fd6282e8f8d9d7b261637ebf52aec07a 248da657a4403d4a40dd6c31c0a1a3ff7d06a5513936de910640cca61d946401: (15.858531447s)
	I1025 08:39:44.521370   24139 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 08:39:44.632731   24139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 08:39:44.641051   24139 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct 25 08:37 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct 25 08:37 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 25 08:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct 25 08:37 /etc/kubernetes/scheduler.conf
	
	I1025 08:39:44.641105   24139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1025 08:39:44.648805   24139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1025 08:39:44.656700   24139 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 08:39:44.656751   24139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 08:39:44.664258   24139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1025 08:39:44.671881   24139 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 08:39:44.671936   24139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 08:39:44.679774   24139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1025 08:39:44.687403   24139 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 08:39:44.687459   24139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
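The grep-then-rm sequence above keeps each kubeconfig only if it already points at the expected control-plane endpoint and deletes it otherwise, so the kubeconfig phase can regenerate it. Condensed into a loop (endpoint and paths taken from the log):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8441"
		for _, f := range []string{
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			// Unreadable or stale (wrong endpoint) configs are removed;
			// kubeadm regenerates them in the next step.
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(f)
			}
		}
	}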
	I1025 08:39:44.694687   24139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 08:39:44.702230   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 08:39:44.751583   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 08:39:46.769081   24139 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.017475981s)
	I1025 08:39:46.769137   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 08:39:46.992248   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 08:39:47.062226   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
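The restart path runs individual kubeadm init phases instead of a full init: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of the same sequence, with the binary path as logged:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.34.1/kubeadm"
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, p := range phases {
			args := append([]string{"init", "phase"}, p...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
				log.Fatalf("phase %v failed: %v\n%s", p, err, out)
			}
		}
	}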
	I1025 08:39:47.131349   24139 api_server.go:52] waiting for apiserver process to appear ...
	I1025 08:39:47.131421   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 08:39:47.631727   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 08:39:48.131825   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 08:39:48.149063   24139 api_server.go:72] duration metric: took 1.017721884s to wait for apiserver process to appear ...
	I1025 08:39:48.149076   24139 api_server.go:88] waiting for apiserver healthz status ...
	I1025 08:39:48.149093   24139 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 08:39:51.238195   24139 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 08:39:51.238219   24139 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 08:39:51.238233   24139 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 08:39:51.344341   24139 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 08:39:51.344371   24139 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 08:39:51.649821   24139 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 08:39:51.658712   24139 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 08:39:51.658726   24139 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 08:39:52.149220   24139 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 08:39:52.169476   24139 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 08:39:52.169492   24139 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 08:39:52.649266   24139 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 08:39:52.658564   24139 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 08:39:52.658599   24139 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 08:39:53.149857   24139 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 08:39:53.159965   24139 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1025 08:39:53.183168   24139 api_server.go:141] control plane version: v1.34.1
	I1025 08:39:53.183185   24139 api_server.go:131] duration metric: took 5.034103769s to wait for apiserver health ...
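The poll above tolerates 403 (the probe runs as system:anonymous until RBAC bootstrap finishes) and 500 (post-start hooks still failing) as transients, and only stops on 200 "ok". A minimal sketch of that loop, assuming the self-signed apiserver cert is skipped rather than verified against minikube's CA:

	package main

	import (
		"crypto/tls"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The serving cert is signed by minikube's own CA;
				// skipping verification keeps the sketch short.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.49.2:8441/healthz")
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					log.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver never became healthy")
	}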
	I1025 08:39:53.183193   24139 cni.go:84] Creating CNI manager for ""
	I1025 08:39:53.183198   24139 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:39:53.186634   24139 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 08:39:53.189621   24139 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 08:39:53.194422   24139 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 08:39:53.194432   24139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 08:39:53.208742   24139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 08:39:53.755661   24139 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 08:39:53.758969   24139 system_pods.go:59] 8 kube-system pods found
	I1025 08:39:53.758990   24139 system_pods.go:61] "coredns-66bc5c9577-gk4sr" [95fa01d7-c842-4ac8-8376-700f8deef15a] Running
	I1025 08:39:53.758999   24139 system_pods.go:61] "etcd-functional-562171" [690e4683-7305-49b3-9658-89f49e8060a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 08:39:53.759003   24139 system_pods.go:61] "kindnet-qkqzc" [573a99c5-c37d-4e56-9b03-b1b232e32064] Running
	I1025 08:39:53.759010   24139 system_pods.go:61] "kube-apiserver-functional-562171" [719b16ab-c067-4eaf-9b3a-21c8f08e9e01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 08:39:53.759018   24139 system_pods.go:61] "kube-controller-manager-functional-562171" [f2f0cec5-c56e-4e3c-a372-1f6e8c3b32ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 08:39:53.759023   24139 system_pods.go:61] "kube-proxy-jznbx" [5418d5d2-65ec-4bb4-87f7-d4fb5ba74044] Running
	I1025 08:39:53.759029   24139 system_pods.go:61] "kube-scheduler-functional-562171" [cca7a5b8-0b33-4674-bb81-00db714f86d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 08:39:53.759032   24139 system_pods.go:61] "storage-provisioner" [5200b4f3-8077-4afd-a312-bbcd3f6ae29d] Running
	I1025 08:39:53.759037   24139 system_pods.go:74] duration metric: took 3.366832ms to wait for pod list to return data ...
	I1025 08:39:53.759043   24139 node_conditions.go:102] verifying NodePressure condition ...
	I1025 08:39:53.761704   24139 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 08:39:53.761727   24139 node_conditions.go:123] node cpu capacity is 2
	I1025 08:39:53.761737   24139 node_conditions.go:105] duration metric: took 2.68928ms to run NodePressure ...
	I1025 08:39:53.761795   24139 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 08:39:54.020340   24139 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1025 08:39:54.024579   24139 kubeadm.go:743] kubelet initialised
	I1025 08:39:54.024590   24139 kubeadm.go:744] duration metric: took 4.23893ms waiting for restarted kubelet to initialise ...
	I1025 08:39:54.024605   24139 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 08:39:54.034541   24139 ops.go:34] apiserver oom_adj: -16
	I1025 08:39:54.034552   24139 kubeadm.go:601] duration metric: took 25.457470908s to restartPrimaryControlPlane
	I1025 08:39:54.034567   24139 kubeadm.go:402] duration metric: took 25.515651035s to StartCluster
	I1025 08:39:54.034581   24139 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:39:54.034653   24139 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 08:39:54.035282   24139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:39:54.035525   24139 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 08:39:54.035856   24139 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:39:54.035897   24139 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 08:39:54.035964   24139 addons.go:69] Setting storage-provisioner=true in profile "functional-562171"
	I1025 08:39:54.035977   24139 addons.go:238] Setting addon storage-provisioner=true in "functional-562171"
	W1025 08:39:54.035982   24139 addons.go:247] addon storage-provisioner should already be in state true
	I1025 08:39:54.035982   24139 addons.go:69] Setting default-storageclass=true in profile "functional-562171"
	I1025 08:39:54.036000   24139 host.go:66] Checking if "functional-562171" exists ...
	I1025 08:39:54.036006   24139 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-562171"
	I1025 08:39:54.036339   24139 cli_runner.go:164] Run: docker container inspect functional-562171 --format={{.State.Status}}
	I1025 08:39:54.036452   24139 cli_runner.go:164] Run: docker container inspect functional-562171 --format={{.State.Status}}
	I1025 08:39:54.039407   24139 out.go:179] * Verifying Kubernetes components...
	I1025 08:39:54.042497   24139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 08:39:54.073464   24139 addons.go:238] Setting addon default-storageclass=true in "functional-562171"
	W1025 08:39:54.073474   24139 addons.go:247] addon default-storageclass should already be in state true
	I1025 08:39:54.073497   24139 host.go:66] Checking if "functional-562171" exists ...
	I1025 08:39:54.073978   24139 cli_runner.go:164] Run: docker container inspect functional-562171 --format={{.State.Status}}
	I1025 08:39:54.075520   24139 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 08:39:54.078858   24139 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 08:39:54.078869   24139 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 08:39:54.078950   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
	I1025 08:39:54.108842   24139 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 08:39:54.108855   24139 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 08:39:54.108917   24139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
	I1025 08:39:54.126912   24139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/functional-562171/id_rsa Username:docker}
	I1025 08:39:54.143965   24139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/functional-562171/id_rsa Username:docker}
	I1025 08:39:54.274958   24139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 08:39:54.278702   24139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 08:39:54.296800   24139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 08:39:55.113360   24139 node_ready.go:35] waiting up to 6m0s for node "functional-562171" to be "Ready" ...
	I1025 08:39:55.117024   24139 node_ready.go:49] node "functional-562171" is "Ready"
	I1025 08:39:55.117049   24139 node_ready.go:38] duration metric: took 3.669998ms for node "functional-562171" to be "Ready" ...
	I1025 08:39:55.117061   24139 api_server.go:52] waiting for apiserver process to appear ...
	I1025 08:39:55.117127   24139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 08:39:55.126828   24139 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 08:39:55.129766   24139 addons.go:514] duration metric: took 1.093843911s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 08:39:55.130190   24139 api_server.go:72] duration metric: took 1.09464172s to wait for apiserver process to appear ...
	I1025 08:39:55.130200   24139 api_server.go:88] waiting for apiserver healthz status ...
	I1025 08:39:55.130220   24139 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 08:39:55.139218   24139 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1025 08:39:55.140202   24139 api_server.go:141] control plane version: v1.34.1
	I1025 08:39:55.140215   24139 api_server.go:131] duration metric: took 10.009739ms to wait for apiserver health ...
	I1025 08:39:55.140227   24139 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 08:39:55.144380   24139 system_pods.go:59] 8 kube-system pods found
	I1025 08:39:55.144396   24139 system_pods.go:61] "coredns-66bc5c9577-gk4sr" [95fa01d7-c842-4ac8-8376-700f8deef15a] Running
	I1025 08:39:55.144405   24139 system_pods.go:61] "etcd-functional-562171" [690e4683-7305-49b3-9658-89f49e8060a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 08:39:55.144410   24139 system_pods.go:61] "kindnet-qkqzc" [573a99c5-c37d-4e56-9b03-b1b232e32064] Running
	I1025 08:39:55.144416   24139 system_pods.go:61] "kube-apiserver-functional-562171" [719b16ab-c067-4eaf-9b3a-21c8f08e9e01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 08:39:55.144422   24139 system_pods.go:61] "kube-controller-manager-functional-562171" [f2f0cec5-c56e-4e3c-a372-1f6e8c3b32ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 08:39:55.144426   24139 system_pods.go:61] "kube-proxy-jznbx" [5418d5d2-65ec-4bb4-87f7-d4fb5ba74044] Running
	I1025 08:39:55.144431   24139 system_pods.go:61] "kube-scheduler-functional-562171" [cca7a5b8-0b33-4674-bb81-00db714f86d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 08:39:55.144435   24139 system_pods.go:61] "storage-provisioner" [5200b4f3-8077-4afd-a312-bbcd3f6ae29d] Running
	I1025 08:39:55.144440   24139 system_pods.go:74] duration metric: took 4.208375ms to wait for pod list to return data ...
	I1025 08:39:55.144446   24139 default_sa.go:34] waiting for default service account to be created ...
	I1025 08:39:55.147416   24139 default_sa.go:45] found service account: "default"
	I1025 08:39:55.147429   24139 default_sa.go:55] duration metric: took 2.978693ms for default service account to be created ...
	I1025 08:39:55.147437   24139 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 08:39:55.150453   24139 system_pods.go:86] 8 kube-system pods found
	I1025 08:39:55.150468   24139 system_pods.go:89] "coredns-66bc5c9577-gk4sr" [95fa01d7-c842-4ac8-8376-700f8deef15a] Running
	I1025 08:39:55.150477   24139 system_pods.go:89] "etcd-functional-562171" [690e4683-7305-49b3-9658-89f49e8060a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 08:39:55.150481   24139 system_pods.go:89] "kindnet-qkqzc" [573a99c5-c37d-4e56-9b03-b1b232e32064] Running
	I1025 08:39:55.150487   24139 system_pods.go:89] "kube-apiserver-functional-562171" [719b16ab-c067-4eaf-9b3a-21c8f08e9e01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 08:39:55.150494   24139 system_pods.go:89] "kube-controller-manager-functional-562171" [f2f0cec5-c56e-4e3c-a372-1f6e8c3b32ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 08:39:55.150497   24139 system_pods.go:89] "kube-proxy-jznbx" [5418d5d2-65ec-4bb4-87f7-d4fb5ba74044] Running
	I1025 08:39:55.150502   24139 system_pods.go:89] "kube-scheduler-functional-562171" [cca7a5b8-0b33-4674-bb81-00db714f86d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 08:39:55.150505   24139 system_pods.go:89] "storage-provisioner" [5200b4f3-8077-4afd-a312-bbcd3f6ae29d] Running
	I1025 08:39:55.150512   24139 system_pods.go:126] duration metric: took 3.069657ms to wait for k8s-apps to be running ...
	I1025 08:39:55.150519   24139 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 08:39:55.150575   24139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 08:39:55.164667   24139 system_svc.go:56] duration metric: took 14.137818ms WaitForService to wait for kubelet
	I1025 08:39:55.164686   24139 kubeadm.go:586] duration metric: took 1.129138983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 08:39:55.164704   24139 node_conditions.go:102] verifying NodePressure condition ...
	I1025 08:39:55.168115   24139 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 08:39:55.168129   24139 node_conditions.go:123] node cpu capacity is 2
	I1025 08:39:55.168140   24139 node_conditions.go:105] duration metric: took 3.430677ms to run NodePressure ...
	I1025 08:39:55.168152   24139 start.go:241] waiting for startup goroutines ...
	I1025 08:39:55.168158   24139 start.go:246] waiting for cluster config update ...
	I1025 08:39:55.168168   24139 start.go:255] writing updated cluster config ...
	I1025 08:39:55.168471   24139 ssh_runner.go:195] Run: rm -f paused
	I1025 08:39:55.172325   24139 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 08:39:55.175899   24139 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gk4sr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:39:55.181258   24139 pod_ready.go:94] pod "coredns-66bc5c9577-gk4sr" is "Ready"
	I1025 08:39:55.181271   24139 pod_ready.go:86] duration metric: took 5.358448ms for pod "coredns-66bc5c9577-gk4sr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:39:55.184127   24139 pod_ready.go:83] waiting for pod "etcd-functional-562171" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 08:39:57.189458   24139 pod_ready.go:104] pod "etcd-functional-562171" is not "Ready", error: <nil>
	I1025 08:39:57.690252   24139 pod_ready.go:94] pod "etcd-functional-562171" is "Ready"
	I1025 08:39:57.690265   24139 pod_ready.go:86] duration metric: took 2.506125091s for pod "etcd-functional-562171" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:39:57.692544   24139 pod_ready.go:83] waiting for pod "kube-apiserver-functional-562171" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 08:39:59.698222   24139 pod_ready.go:104] pod "kube-apiserver-functional-562171" is not "Ready", error: <nil>
	W1025 08:40:01.699023   24139 pod_ready.go:104] pod "kube-apiserver-functional-562171" is not "Ready", error: <nil>
	I1025 08:40:03.198462   24139 pod_ready.go:94] pod "kube-apiserver-functional-562171" is "Ready"
	I1025 08:40:03.198476   24139 pod_ready.go:86] duration metric: took 5.505920897s for pod "kube-apiserver-functional-562171" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:40:03.200729   24139 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-562171" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:40:03.205616   24139 pod_ready.go:94] pod "kube-controller-manager-functional-562171" is "Ready"
	I1025 08:40:03.205630   24139 pod_ready.go:86] duration metric: took 4.889201ms for pod "kube-controller-manager-functional-562171" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:40:03.208114   24139 pod_ready.go:83] waiting for pod "kube-proxy-jznbx" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:40:03.213115   24139 pod_ready.go:94] pod "kube-proxy-jznbx" is "Ready"
	I1025 08:40:03.213128   24139 pod_ready.go:86] duration metric: took 5.001562ms for pod "kube-proxy-jznbx" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:40:03.215601   24139 pod_ready.go:83] waiting for pod "kube-scheduler-functional-562171" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 08:40:05.221085   24139 pod_ready.go:104] pod "kube-scheduler-functional-562171" is not "Ready", error: <nil>
	I1025 08:40:05.720593   24139 pod_ready.go:94] pod "kube-scheduler-functional-562171" is "Ready"
	I1025 08:40:05.720606   24139 pod_ready.go:86] duration metric: took 2.504993295s for pod "kube-scheduler-functional-562171" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 08:40:05.720616   24139 pod_ready.go:40] duration metric: took 10.548269544s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 08:40:05.779608   24139 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 08:40:05.782649   24139 out.go:179] * Done! kubectl is now configured to use "functional-562171" cluster and "default" namespace by default
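
Editor's note: the start log above ends with an extra readiness wait — for each label selector enumerated at 08:39:55.172325 (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler), the matching kube-system pods are polled until they report Ready or the 4m0s budget expires. Below is a minimal client-go sketch of that kind of poll, for orientation only: it is not minikube's actual pod_ready.go, the kubeconfig path and 500ms poll interval are assumptions, and it omits minikube's "Ready or be gone" handling for deleted pods.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The same selectors the log enumerates for the extra wait.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log

	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			ready := err == nil && len(pods.Items) > 0
			for i := range pods.Items {
				ready = ready && podReady(&pods.Items[i])
			}
			if ready {
				fmt.Printf("pods for %q are Ready\n", sel)
				break
			}
			if time.Now().After(deadline) {
				fmt.Printf("timed out waiting on %q\n", sel)
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
}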
	
	
	==> CRI-O <==
	Oct 25 08:40:46 functional-562171 crio[3517]: time="2025-10-25T08:40:46.215292905Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-2rq78 Namespace:default ID:aabe64c00c28cdb9926d189583a20ed8571b0ad1354c173b1002bc207abfd793 UID:dcec5e0b-0bb3-448c-8e27-9b2f223bc9c7 NetNS:/var/run/netns/b7dd2855-3b4c-4e80-89b6-ede25289a842 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079da0}] Aliases:map[]}"
	Oct 25 08:40:46 functional-562171 crio[3517]: time="2025-10-25T08:40:46.215458888Z" level=info msg="Checking pod default_hello-node-75c85bcc94-2rq78 for CNI network kindnet (type=ptp)"
	Oct 25 08:40:46 functional-562171 crio[3517]: time="2025-10-25T08:40:46.218157281Z" level=info msg="Ran pod sandbox aabe64c00c28cdb9926d189583a20ed8571b0ad1354c173b1002bc207abfd793 with infra container: default/hello-node-75c85bcc94-2rq78/POD" id=5cc91bbd-0531-49ef-bf52-81f10294a393 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 08:40:46 functional-562171 crio[3517]: time="2025-10-25T08:40:46.22217036Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c1295afc-a12f-4f1a-baca-a7af77bfcd95 name=/runtime.v1.ImageService/PullImage
	Oct 25 08:40:47 functional-562171 crio[3517]: time="2025-10-25T08:40:47.197025621Z" level=info msg="Stopping pod sandbox: a5281f676cbb32ddb912626480fd99d19cbac7e2eced413000ac2f82ede410a4" id=bc0694e4-bcf8-4ae0-b814-4dc7d9d7178c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:40:47 functional-562171 crio[3517]: time="2025-10-25T08:40:47.197076493Z" level=info msg="Stopped pod sandbox (already stopped): a5281f676cbb32ddb912626480fd99d19cbac7e2eced413000ac2f82ede410a4" id=bc0694e4-bcf8-4ae0-b814-4dc7d9d7178c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:40:47 functional-562171 crio[3517]: time="2025-10-25T08:40:47.197464402Z" level=info msg="Removing pod sandbox: a5281f676cbb32ddb912626480fd99d19cbac7e2eced413000ac2f82ede410a4" id=b4392460-3a92-4cdf-aac7-16414a861e64 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 08:40:47 functional-562171 crio[3517]: time="2025-10-25T08:40:47.201118938Z" level=info msg="Removed pod sandbox: a5281f676cbb32ddb912626480fd99d19cbac7e2eced413000ac2f82ede410a4" id=b4392460-3a92-4cdf-aac7-16414a861e64 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 08:40:47 functional-562171 crio[3517]: time="2025-10-25T08:40:47.202147788Z" level=info msg="Stopping pod sandbox: e0b032d97dfacd3cb339b04e70cbb86b212fe491edbd828692af3c450c054865" id=0f5e3378-0004-47c4-9bfa-e23eb72a3b16 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:40:47 functional-562171 crio[3517]: time="2025-10-25T08:40:47.202289606Z" level=info msg="Stopped pod sandbox (already stopped): e0b032d97dfacd3cb339b04e70cbb86b212fe491edbd828692af3c450c054865" id=0f5e3378-0004-47c4-9bfa-e23eb72a3b16 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:40:47 functional-562171 crio[3517]: time="2025-10-25T08:40:47.202660629Z" level=info msg="Removing pod sandbox: e0b032d97dfacd3cb339b04e70cbb86b212fe491edbd828692af3c450c054865" id=49e216d1-e830-48a9-bf6f-4783602358ee name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 08:40:47 functional-562171 crio[3517]: time="2025-10-25T08:40:47.206047996Z" level=info msg="Removed pod sandbox: e0b032d97dfacd3cb339b04e70cbb86b212fe491edbd828692af3c450c054865" id=49e216d1-e830-48a9-bf6f-4783602358ee name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 08:40:47 functional-562171 crio[3517]: time="2025-10-25T08:40:47.206516373Z" level=info msg="Stopping pod sandbox: 8f0757150e26d91d118f2c168b0aed112e25e8ab80fe5bd6b726a4abdab42992" id=93ff9e1b-afee-498a-a795-65f7955a8183 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:40:47 functional-562171 crio[3517]: time="2025-10-25T08:40:47.206562166Z" level=info msg="Stopped pod sandbox (already stopped): 8f0757150e26d91d118f2c168b0aed112e25e8ab80fe5bd6b726a4abdab42992" id=93ff9e1b-afee-498a-a795-65f7955a8183 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 08:40:47 functional-562171 crio[3517]: time="2025-10-25T08:40:47.206856158Z" level=info msg="Removing pod sandbox: 8f0757150e26d91d118f2c168b0aed112e25e8ab80fe5bd6b726a4abdab42992" id=7dddd734-2860-40de-a809-805f07290f6f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 08:40:47 functional-562171 crio[3517]: time="2025-10-25T08:40:47.21155803Z" level=info msg="Removed pod sandbox: 8f0757150e26d91d118f2c168b0aed112e25e8ab80fe5bd6b726a4abdab42992" id=7dddd734-2860-40de-a809-805f07290f6f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 08:41:01 functional-562171 crio[3517]: time="2025-10-25T08:41:01.164298638Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f3a55ea9-e865-4bc5-ab01-eefc6ac0a56e name=/runtime.v1.ImageService/PullImage
	Oct 25 08:41:07 functional-562171 crio[3517]: time="2025-10-25T08:41:07.164537546Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=564a7562-ddee-46ed-b380-66142f7eb7e7 name=/runtime.v1.ImageService/PullImage
	Oct 25 08:41:26 functional-562171 crio[3517]: time="2025-10-25T08:41:26.162442862Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8cf2fc76-73c3-4d3c-8c04-22cdd780a122 name=/runtime.v1.ImageService/PullImage
	Oct 25 08:41:55 functional-562171 crio[3517]: time="2025-10-25T08:41:55.163158373Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=492eb113-d46f-43ba-b0d7-4fd40b6d34eb name=/runtime.v1.ImageService/PullImage
	Oct 25 08:42:12 functional-562171 crio[3517]: time="2025-10-25T08:42:12.162618943Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4deae259-d37c-48d9-9ded-c9244550232e name=/runtime.v1.ImageService/PullImage
	Oct 25 08:43:24 functional-562171 crio[3517]: time="2025-10-25T08:43:24.162589443Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a166039b-1ced-4dba-9646-f5d291532f49 name=/runtime.v1.ImageService/PullImage
	Oct 25 08:43:40 functional-562171 crio[3517]: time="2025-10-25T08:43:40.162458376Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=dd4ad5dd-5ba6-48a5-921f-0a4905bf6d16 name=/runtime.v1.ImageService/PullImage
	Oct 25 08:46:16 functional-562171 crio[3517]: time="2025-10-25T08:46:16.162907407Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=40c879f0-24af-417f-b6e6-b04e88c4935e name=/runtime.v1.ImageService/PullImage
	Oct 25 08:46:31 functional-562171 crio[3517]: time="2025-10-25T08:46:31.163012325Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ab0b36ea-f494-4eb6-bd6b-f85ef9624764 name=/runtime.v1.ImageService/PullImage
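
Editor's note: in the CRI-O excerpt above, the same PullImage request for kicbase/echo-server:latest is re-issued at lengthening intervals (08:40:46 through 08:46:31) with no corresponding image-pulled entry in this window — consistent with kubelet image-pull backoff for the default/hello-node-75c85bcc94-2rq78 sandbox created at the top of the excerpt. For orientation, here is a hedged Go sketch of the CRI ImageService/PullImage RPC these entries record; the socket path, timeout, and error handling are assumptions, not CRI-O or kubelet source.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default CRI socket (assumed path).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewImageServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// The image the kubelet kept re-requesting in the log above.
	resp, err := client.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "kicbase/echo-server:latest"},
	})
	if err != nil {
		// A pull that never completes surfaces here; the kubelet then
		// retries with backoff, producing repeated log entries like those above.
		fmt.Println("pull failed or timed out:", err)
		return
	}
	fmt.Println("pulled image ref:", resp.ImageRef)
}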
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ac98b48aa5504       docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f   9 minutes ago       Running             myfrontend                0                   e86f194db0cfb       sp-pod                                      default
	e564ac96896f2       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   edb643037185b       nginx-svc                                   default
	3f777dad508d4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       4                   203b630066500       storage-provisioner                         kube-system
	bf63bb2324c37       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   0442bc2eda0f0       kindnet-qkqzc                               kube-system
	2ebea907664ac       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                3                   002bf20db2275       kube-proxy-jznbx                            kube-system
	77c22745c7878       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   9e3871bdc6a10       kube-apiserver-functional-562171            kube-system
	582aa378fa16e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   3                   48727f442d00f       kube-controller-manager-functional-562171   kube-system
	d3350a475f95e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      3                   da31997e6c994       etcd-functional-562171                      kube-system
	d0bd0b161fa1e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            3                   b9327a2363e44       kube-scheduler-functional-562171            kube-system
	84ae843e4ffef       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Exited              storage-provisioner       3                   203b630066500       storage-provisioner                         kube-system
	61ff8f2694aa7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   8fc60faabb48c       coredns-66bc5c9577-gk4sr                    kube-system
	050f64f8b63e4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            2                   b9327a2363e44       kube-scheduler-functional-562171            kube-system
	7ee461c0e642f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   2                   48727f442d00f       kube-controller-manager-functional-562171   kube-system
	fc8395732c0b9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               2                   0442bc2eda0f0       kindnet-qkqzc                               kube-system
	07d8948e53797       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                2                   002bf20db2275       kube-proxy-jznbx                            kube-system
	5ee56eaa492c0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      2                   da31997e6c994       etcd-functional-562171                      kube-system
	3feac1d616e88       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   8fc60faabb48c       coredns-66bc5c9577-gk4sr                    kube-system
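
Editor's note: the table above is a point-in-time view of CRI-O's container list, running and exited alike. As a companion to the previous sketch, here is a short illustration of the CRI RuntimeService/ListContainers call that returns this data, under the same assumed socket path; the field selection and formatting are illustrative, not the report generator's code.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// No filter: list all containers, running and exited, like the table above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Truncated ID, container name, and state, roughly matching the table's columns.
		fmt.Printf("%.13s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}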
	
	
	==> coredns [3feac1d616e8876f368eb124cd64e85bab44349dded3295535f3fde701f39ba3] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54352 - 4434 "HINFO IN 3369846336770688185.4380642837687624839. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021878629s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [61ff8f2694aa789cd0201e5f797252206a6718544a0042befac37d06ff417737] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56350 - 39727 "HINFO IN 8397920839480075134.8164730081637793285. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02301758s
	
	
	==> describe nodes <==
	Name:               functional-562171
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-562171
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=functional-562171
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T08_37_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 08:37:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-562171
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 08:50:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 08:50:23 +0000   Sat, 25 Oct 2025 08:37:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 08:50:23 +0000   Sat, 25 Oct 2025 08:37:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 08:50:23 +0000   Sat, 25 Oct 2025 08:37:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 08:50:23 +0000   Sat, 25 Oct 2025 08:38:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-562171
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                e220d4d8-48dd-4fc3-9b15-049e3743abd4
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-2rq78                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  default                     hello-node-connect-7d85dfc575-n7k4v          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 coredns-66bc5c9577-gk4sr                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-562171                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-qkqzc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-562171             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-562171    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-jznbx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-562171             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-562171 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-562171 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-562171 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-562171 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-562171 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-562171 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-562171 event: Registered Node functional-562171 in Controller
	  Normal   NodeReady                12m                kubelet          Node functional-562171 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-562171 event: Registered Node functional-562171 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-562171 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-562171 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-562171 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-562171 event: Registered Node functional-562171 in Controller
	
	
	==> dmesg <==
	[Oct25 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014683] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497292] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033389] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.792499] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.870372] kauditd_printk_skb: 36 callbacks suppressed
	[Oct25 08:30] overlayfs: idmapped layers are currently not supported
	[  +0.060360] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct25 08:36] overlayfs: idmapped layers are currently not supported
	[Oct25 08:37] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5ee56eaa492c0992401aecc13f4ca65df558341eacee786a686667cdd4d925a7] <==
	{"level":"warn","ts":"2025-10-25T08:39:32.545296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:32.564440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:32.589292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:32.613153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:32.632584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:32.648349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:32.763336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41118","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T08:39:44.383356Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T08:39:44.383413Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-562171","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-25T08:39:44.383525Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T08:39:44.385071Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T08:39:44.386815Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T08:39:44.386869Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-25T08:39:44.386872Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T08:39:44.386924Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-25T08:39:44.386928Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-10-25T08:39:44.386933Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T08:39:44.386967Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-25T08:39:44.386990Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T08:39:44.387002Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T08:39:44.387009Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T08:39:44.390918Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-25T08:39:44.391011Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T08:39:44.391036Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-25T08:39:44.391046Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-562171","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [d3350a475f95ecef3bd847a053e9d182d02f16a875c3900b0ad17e58a8b626c2] <==
	{"level":"warn","ts":"2025-10-25T08:39:49.855515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:49.871263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:49.891489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:49.906447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:49.922413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:49.937451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:49.966050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:49.984182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:49.991220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:50.010897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:50.028794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:50.049210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:50.069796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:50.083871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:50.107521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:50.120885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:50.144465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:50.161436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:50.199731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:50.213571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:50.266042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T08:39:50.339284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51908","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T08:49:48.810008Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1140}
	{"level":"info","ts":"2025-10-25T08:49:48.834276Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1140,"took":"23.965854ms","hash":653982729,"current-db-size-bytes":3301376,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1433600,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-25T08:49:48.834333Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":653982729,"revision":1140,"compact-revision":-1}
	
	
	==> kernel <==
	 08:50:32 up 33 min,  0 user,  load average: 0.34, 0.38, 0.56
	Linux functional-562171 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bf63bb2324c378824b35d28c2c5041ce7b13bfb60823e3926561f23a8ceee5f5] <==
	I1025 08:48:22.824938       1 main.go:301] handling current node
	I1025 08:48:32.825455       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:48:32.825489       1 main.go:301] handling current node
	I1025 08:48:42.822706       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:48:42.822837       1 main.go:301] handling current node
	I1025 08:48:52.828840       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:48:52.828943       1 main.go:301] handling current node
	I1025 08:49:02.820849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:49:02.820887       1 main.go:301] handling current node
	I1025 08:49:12.823058       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:49:12.823089       1 main.go:301] handling current node
	I1025 08:49:22.821036       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:49:22.821068       1 main.go:301] handling current node
	I1025 08:49:32.820992       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:49:32.821049       1 main.go:301] handling current node
	I1025 08:49:42.820435       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:49:42.820559       1 main.go:301] handling current node
	I1025 08:49:52.820797       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:49:52.820831       1 main.go:301] handling current node
	I1025 08:50:02.829721       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:50:02.829759       1 main.go:301] handling current node
	I1025 08:50:12.822237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:50:12.822289       1 main.go:301] handling current node
	I1025 08:50:22.825507       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:50:22.825542       1 main.go:301] handling current node
	
	
	==> kindnet [fc8395732c0b9d9844b235f58eaa64199ecc4a2397cac477f7e2a17dcaf7405e] <==
	I1025 08:39:28.125630       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 08:39:28.128021       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1025 08:39:28.128154       1 main.go:148] setting mtu 1500 for CNI 
	I1025 08:39:28.128166       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 08:39:28.128181       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T08:39:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 08:39:28.342157       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 08:39:28.342246       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 08:39:28.342280       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 08:39:28.347418       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 08:39:28.347673       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 08:39:28.347835       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 08:39:28.347970       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 08:39:28.348106       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1025 08:39:33.650213       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 08:39:33.650251       1 metrics.go:72] Registering metrics
	I1025 08:39:33.650324       1 controller.go:711] "Syncing nftables rules"
	I1025 08:39:38.342062       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 08:39:38.342125       1 main.go:301] handling current node
	
	
	==> kube-apiserver [77c22745c7878af3c9867677923e6d8d0913c9d7882131fdd481a2e76373371a] <==
	I1025 08:39:51.528107       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 08:39:51.528136       1 cache.go:39] Caches are synced for autoregister controller
	I1025 08:39:51.527827       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 08:39:51.527385       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 08:39:51.529065       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 08:39:51.527780       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 08:39:51.527792       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 08:39:51.527813       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 08:39:51.541058       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 08:39:52.139021       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 08:39:52.183553       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1025 08:39:52.744034       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1025 08:39:52.745361       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 08:39:52.751128       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 08:39:53.748476       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 08:39:53.884882       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 08:39:53.955322       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 08:39:53.966829       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 08:39:59.531547       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 08:40:09.089042       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.222.13"}
	I1025 08:40:21.460583       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.167.122"}
	I1025 08:40:30.268049       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.92.53"}
	E1025 08:40:38.798582       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:46356: use of closed network connection
	I1025 08:40:45.994837       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.21.181"}
	I1025 08:49:51.426529       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [582aa378fa16e26e7d7c02326fd48a6d41cd24926bcb4d76f365871c7db3c04d] <==
	I1025 08:39:54.845217       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 08:39:54.845299       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 08:39:54.846717       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 08:39:54.846990       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 08:39:54.853568       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 08:39:54.855842       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 08:39:54.859337       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 08:39:54.859482       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 08:39:54.859498       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 08:39:54.859505       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 08:39:54.861484       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 08:39:54.863542       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 08:39:54.864712       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 08:39:54.868933       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 08:39:54.869553       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 08:39:54.869663       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 08:39:54.869704       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 08:39:54.872124       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 08:39:54.878357       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 08:39:54.878391       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 08:39:54.882163       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 08:39:54.887403       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 08:39:54.896362       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 08:39:54.900373       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 08:39:54.901556       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	
	
	==> kube-controller-manager [7ee461c0e642fc2844ed7d9012b0e1b7420ad5dedac7c2f4741532ca1a47f526] <==
	I1025 08:39:29.481403       1 serving.go:386] Generated self-signed cert in-memory
	I1025 08:39:31.417612       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1025 08:39:31.417647       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 08:39:31.423436       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1025 08:39:31.424189       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1025 08:39:31.424323       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 08:39:31.424414       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1025 08:39:43.467231       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [07d8948e5379760ad6b0d1be7d0126a868b0184233e347ff32ca19e843072b2c] <==
	I1025 08:39:33.764624       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 08:39:33.765015       1 server.go:527] "Version info" version="v1.34.1"
	I1025 08:39:33.765689       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 08:39:33.767409       1 config.go:200] "Starting service config controller"
	I1025 08:39:33.768600       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 08:39:33.767910       1 config.go:106] "Starting endpoint slice config controller"
	I1025 08:39:33.768701       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 08:39:33.767923       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 08:39:33.768755       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 08:39:33.768291       1 config.go:309] "Starting node config controller"
	I1025 08:39:33.768817       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 08:39:33.768846       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E1025 08:39:33.771201       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1025 08:39:33.771418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8441/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1025 08:39:33.771573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1025 08:39:33.771720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 08:39:34.796779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1025 08:39:35.103050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8441/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1025 08:39:35.176965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 08:39:36.815617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1025 08:39:37.076290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 08:39:37.669570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8441/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1025 08:39:40.430681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1025 08:39:42.443991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 08:39:43.486838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8441/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	
	
	==> kube-proxy [2ebea907664ac0caae1be83d8c0b466d33576fd1b76f7268890a50ca767ac70d] <==
	I1025 08:39:52.764058       1 server_linux.go:53] "Using iptables proxy"
	I1025 08:39:52.903402       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 08:39:53.004552       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 08:39:53.004683       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 08:39:53.004805       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 08:39:53.047512       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 08:39:53.047643       1 server_linux.go:132] "Using iptables Proxier"
	I1025 08:39:53.053877       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 08:39:53.054509       1 server.go:527] "Version info" version="v1.34.1"
	I1025 08:39:53.054591       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 08:39:53.061389       1 config.go:106] "Starting endpoint slice config controller"
	I1025 08:39:53.061472       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 08:39:53.061814       1 config.go:200] "Starting service config controller"
	I1025 08:39:53.061872       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 08:39:53.061966       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 08:39:53.062732       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 08:39:53.063177       1 config.go:309] "Starting node config controller"
	I1025 08:39:53.063184       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 08:39:53.063189       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 08:39:53.166095       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 08:39:53.166932       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 08:39:53.167044       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [050f64f8b63e45822268cb6c34d9030c6a57b8ef7c0575bdec2f59e7e67b2089] <==
	I1025 08:39:30.466340       1 serving.go:386] Generated self-signed cert in-memory
	W1025 08:39:33.470422       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 08:39:33.470534       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 08:39:33.470568       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 08:39:33.470608       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 08:39:33.564159       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 08:39:33.564254       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1025 08:39:33.564342       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1025 08:39:33.571047       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 08:39:33.571136       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 08:39:33.571942       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 08:39:33.572016       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 08:39:33.574695       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E1025 08:39:33.574824       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 08:39:33.574862       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 08:39:33.574976       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1025 08:39:33.575016       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1025 08:39:33.575057       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1025 08:39:33.575307       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1025 08:39:33.575372       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d0bd0b161fa1ed05a6d560be50c4d8f67d920ed3240092a1166a0b3f4fe303fc] <==
	I1025 08:39:50.111373       1 serving.go:386] Generated self-signed cert in-memory
	I1025 08:39:52.862070       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 08:39:52.862099       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 08:39:52.867184       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 08:39:52.867265       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 08:39:52.867303       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 08:39:52.867353       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 08:39:52.876141       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 08:39:52.876171       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 08:39:52.876197       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 08:39:52.876203       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 08:39:52.967443       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 08:39:52.976895       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 08:39:52.976969       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 08:47:45 functional-562171 kubelet[4099]: E1025 08:47:45.172833    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2rq78" podUID="dcec5e0b-0bb3-448c-8e27-9b2f223bc9c7"
	Oct 25 08:47:51 functional-562171 kubelet[4099]: E1025 08:47:51.162272    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n7k4v" podUID="7661065f-1663-4fd9-a12b-1487fd093564"
	Oct 25 08:48:00 functional-562171 kubelet[4099]: E1025 08:48:00.165404    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2rq78" podUID="dcec5e0b-0bb3-448c-8e27-9b2f223bc9c7"
	Oct 25 08:48:03 functional-562171 kubelet[4099]: E1025 08:48:03.162428    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n7k4v" podUID="7661065f-1663-4fd9-a12b-1487fd093564"
	Oct 25 08:48:11 functional-562171 kubelet[4099]: E1025 08:48:11.163236    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2rq78" podUID="dcec5e0b-0bb3-448c-8e27-9b2f223bc9c7"
	Oct 25 08:48:16 functional-562171 kubelet[4099]: E1025 08:48:16.162518    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n7k4v" podUID="7661065f-1663-4fd9-a12b-1487fd093564"
	Oct 25 08:48:25 functional-562171 kubelet[4099]: E1025 08:48:25.162286    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2rq78" podUID="dcec5e0b-0bb3-448c-8e27-9b2f223bc9c7"
	Oct 25 08:48:29 functional-562171 kubelet[4099]: E1025 08:48:29.161671    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n7k4v" podUID="7661065f-1663-4fd9-a12b-1487fd093564"
	Oct 25 08:48:40 functional-562171 kubelet[4099]: E1025 08:48:40.162355    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n7k4v" podUID="7661065f-1663-4fd9-a12b-1487fd093564"
	Oct 25 08:48:40 functional-562171 kubelet[4099]: E1025 08:48:40.162434    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2rq78" podUID="dcec5e0b-0bb3-448c-8e27-9b2f223bc9c7"
	Oct 25 08:48:52 functional-562171 kubelet[4099]: E1025 08:48:52.162120    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2rq78" podUID="dcec5e0b-0bb3-448c-8e27-9b2f223bc9c7"
	Oct 25 08:48:55 functional-562171 kubelet[4099]: E1025 08:48:55.162077    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n7k4v" podUID="7661065f-1663-4fd9-a12b-1487fd093564"
	Oct 25 08:49:04 functional-562171 kubelet[4099]: E1025 08:49:04.161545    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2rq78" podUID="dcec5e0b-0bb3-448c-8e27-9b2f223bc9c7"
	Oct 25 08:49:10 functional-562171 kubelet[4099]: E1025 08:49:10.161648    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n7k4v" podUID="7661065f-1663-4fd9-a12b-1487fd093564"
	Oct 25 08:49:15 functional-562171 kubelet[4099]: E1025 08:49:15.162615    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2rq78" podUID="dcec5e0b-0bb3-448c-8e27-9b2f223bc9c7"
	Oct 25 08:49:25 functional-562171 kubelet[4099]: E1025 08:49:25.163569    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n7k4v" podUID="7661065f-1663-4fd9-a12b-1487fd093564"
	Oct 25 08:49:30 functional-562171 kubelet[4099]: E1025 08:49:30.162161    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2rq78" podUID="dcec5e0b-0bb3-448c-8e27-9b2f223bc9c7"
	Oct 25 08:49:39 functional-562171 kubelet[4099]: E1025 08:49:39.163058    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n7k4v" podUID="7661065f-1663-4fd9-a12b-1487fd093564"
	Oct 25 08:49:43 functional-562171 kubelet[4099]: E1025 08:49:43.161948    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2rq78" podUID="dcec5e0b-0bb3-448c-8e27-9b2f223bc9c7"
	Oct 25 08:49:54 functional-562171 kubelet[4099]: E1025 08:49:54.162165    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n7k4v" podUID="7661065f-1663-4fd9-a12b-1487fd093564"
	Oct 25 08:49:58 functional-562171 kubelet[4099]: E1025 08:49:58.162320    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2rq78" podUID="dcec5e0b-0bb3-448c-8e27-9b2f223bc9c7"
	Oct 25 08:50:05 functional-562171 kubelet[4099]: E1025 08:50:05.162840    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n7k4v" podUID="7661065f-1663-4fd9-a12b-1487fd093564"
	Oct 25 08:50:09 functional-562171 kubelet[4099]: E1025 08:50:09.162277    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2rq78" podUID="dcec5e0b-0bb3-448c-8e27-9b2f223bc9c7"
	Oct 25 08:50:19 functional-562171 kubelet[4099]: E1025 08:50:19.162768    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n7k4v" podUID="7661065f-1663-4fd9-a12b-1487fd093564"
	Oct 25 08:50:23 functional-562171 kubelet[4099]: E1025 08:50:23.163479    4099 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-2rq78" podUID="dcec5e0b-0bb3-448c-8e27-9b2f223bc9c7"
	
	
	==> storage-provisioner [3f777dad508d4dbf88c28703032b19da9f85965f22a66897b832fa53b1986e4f] <==
	W1025 08:50:08.858629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:10.861088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:10.867741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:12.871170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:12.875998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:14.879703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:14.886058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:16.888840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:16.893076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:18.897058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:18.901695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:20.905150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:20.911647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:22.914821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:22.919285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:24.922307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:24.929666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:26.934096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:26.938785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:28.942714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:28.947395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:30.950929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:30.959158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:32.962579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 08:50:32.971965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [84ae843e4ffef4786df78de9ebadc381a59fe4779d845d8958d3301604703ff7] <==
	I1025 08:39:39.637274       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 08:39:39.639226       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562171 -n functional-562171
helpers_test.go:269: (dbg) Run:  kubectl --context functional-562171 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-2rq78 hello-node-connect-7d85dfc575-n7k4v
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-562171 describe pod hello-node-75c85bcc94-2rq78 hello-node-connect-7d85dfc575-n7k4v
helpers_test.go:290: (dbg) kubectl --context functional-562171 describe pod hello-node-75c85bcc94-2rq78 hello-node-connect-7d85dfc575-n7k4v:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-2rq78
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-562171/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 08:40:45 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-clg47 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-clg47:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-2rq78 to functional-562171
	  Normal   Pulling    6m53s (x5 over 9m47s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m53s (x5 over 9m47s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m53s (x5 over 9m47s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m44s (x21 over 9m47s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m44s (x21 over 9m47s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-n7k4v
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-562171/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 08:40:30 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rsdq2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rsdq2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-n7k4v to functional-562171
	  Normal   Pulling    7m9s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m9s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m9s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m56s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m56s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.87s)
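The short-name failure in the events above is the root cause of this test and of most of the image and service failures that follow: CRI-O is running with short-name-mode set to "enforcing", so an unqualified reference such as kicbase/echo-server:latest is rejected whenever more than one configured search registry could resolve it (the "returns ambiguous list" error). A minimal sketch of the host-side configuration involved, with illustrative registry values not taken from this run:

	# /etc/containers/registries.conf (illustrative values)
	short-name-mode = "enforcing"                              # reject ambiguous short names
	unqualified-search-registries = ["docker.io", "quay.io"]   # >1 entry makes short names ambiguous

Under enforcing mode, a fully qualified reference such as docker.io/kicbase/echo-server:latest avoids the ambiguity entirely.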

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image load --daemon kicbase/echo-server:functional-562171 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-562171" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)
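This test loads a tag from the local Docker daemon into the cluster runtime and then expects `image ls` to show it. On crio, loaded images can surface under a localhost/ prefix (the ImageSaveDaemon failure below relies on exactly that name), so a manual re-run is best checked with a loose grep; a sketch reusing the commands from this report:

	out/minikube-linux-arm64 -p functional-562171 image load --daemon kicbase/echo-server:functional-562171
	out/minikube-linux-arm64 -p functional-562171 image ls | grep echo-server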

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image load --daemon kicbase/echo-server:functional-562171 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-562171" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-562171
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image load --daemon kicbase/echo-server:functional-562171 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-562171" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image save kicbase/echo-server:functional-562171 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)
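A successful `image save` writes a tar archive of the image to the given path; here the tag never made it into the runtime (see the load failures above), so there is nothing to save and the file is never created. Assuming a run where the save succeeds, the archive can be sanity-checked directly:

	out/minikube-linux-arm64 -p functional-562171 image save kicbase/echo-server:functional-562171 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	tar -tf /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar   # lists manifest and layer blobs if valid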

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1025 08:40:20.430715   28311 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:40:20.430879   28311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:40:20.430891   28311 out.go:374] Setting ErrFile to fd 2...
	I1025 08:40:20.430898   28311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:40:20.431260   28311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:40:20.432199   28311 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:40:20.432345   28311 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:40:20.432980   28311 cli_runner.go:164] Run: docker container inspect functional-562171 --format={{.State.Status}}
	I1025 08:40:20.453592   28311 ssh_runner.go:195] Run: systemctl --version
	I1025 08:40:20.453656   28311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
	I1025 08:40:20.478660   28311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/functional-562171/id_rsa Username:docker}
	I1025 08:40:20.589213   28311 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1025 08:40:20.589269   28311 cache_images.go:254] Failed to load cached images for "functional-562171": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1025 08:40:20.589290   28311 cache_images.go:266] failed pushing to: functional-562171

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)
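This failure is purely downstream of ImageSaveToFile: the stat error in the stderr above shows the tar from the previous step was never written, so the load has nothing to read. When scripting the same save/load round trip outside the harness, a file-existence guard keeps the second error from masking the first:

	f=/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	[ -f "$f" ] && out/minikube-linux-arm64 -p functional-562171 image load "$f"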

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-562171
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image save --daemon kicbase/echo-server:functional-562171 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-562171
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-562171: exit status 1 (26.239323ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-562171

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-562171

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)
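`image save --daemon` is expected to push the tag back into the local Docker daemon, and the test looks it up under the localhost/ prefix that crio applies to images loaded without a registry host. When diagnosing by hand it is worth checking both spellings, since at most one of them will exist:

	docker image inspect localhost/kicbase/echo-server:functional-562171 --format '{{.Id}}'
	docker images | grep echo-server   # catches either naming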

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-562171 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-562171 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-2rq78" [dcec5e0b-0bb3-448c-8e27-9b2f223bc9c7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1025 08:42:57.139647    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:43:24.845684    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:47:57.140421    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562171 -n functional-562171
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-25 08:50:46.449326 +0000 UTC m=+1263.533172403
functional_test.go:1460: (dbg) Run:  kubectl --context functional-562171 describe po hello-node-75c85bcc94-2rq78 -n default
functional_test.go:1460: (dbg) kubectl --context functional-562171 describe po hello-node-75c85bcc94-2rq78 -n default:
Name:             hello-node-75c85bcc94-2rq78
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-562171/192.168.49.2
Start Time:       Sat, 25 Oct 2025 08:40:45 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-clg47 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-clg47:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-2rq78 to functional-562171
  Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m57s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m57s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-562171 logs hello-node-75c85bcc94-2rq78 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-562171 logs hello-node-75c85bcc94-2rq78 -n default: exit status 1 (115.669248ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-2rq78" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-562171 logs hello-node-75c85bcc94-2rq78 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.90s)
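Every event in the describe output above is the same short-name enforcement failure seen in ServiceCmdConnect, so the deployment can never reach Ready within the 10m window. Outside the harness, repointing the deployment at a fully qualified image is the usual remediation; a hypothetical example reusing this report's context:

	kubectl --context functional-562171 set image deployment/hello-node echo-server=docker.io/kicbase/echo-server:latest
	kubectl --context functional-562171 rollout status deployment/hello-node --timeout=120s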

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562171 service --namespace=default --https --url hello-node: exit status 115 (610.509386ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32753
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-562171 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562171 service hello-node --url --format={{.IP}}: exit status 115 (587.520177ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-562171 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562171 service hello-node --url: exit status 115 (482.089761ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32753
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-562171 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32753
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.48s)
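Note: the HTTPS, Format, and URL subtests all fail with SVC_UNREACHABLE for the same underlying reason as DeployApp: the hello-node service and its NodePort (32753) exist, but no pod ever became ready behind them. A quick confirmation from the host (a sketch; the app=hello-node label follows the deployment name):
	$ kubectl --context functional-562171 get endpoints hello-node      # ENDPOINTS stays <none> while the image pull fails
	$ kubectl --context functional-562171 get pods -l app=hello-node    # shows ImagePullBackOff, matching the events above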

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-446585 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-446585 --output=json --user=testUser: exit status 80 (2.64487982s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7fabf29b-8b8a-4d1e-8af3-86a69a2ce604","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-446585 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"155609f9-02e2-40e9-adcc-e90f7480bb68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-25T09:03:45Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"960cd760-9bf8-4530-82ee-37b6e4470571","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-446585 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.86s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-446585 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-446585 --output=json --user=testUser: exit status 80 (1.860720135s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2230c757-836e-405b-adc0-5bb4f69e2a93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-446585 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"0c436da9-70e5-4868-8436-c54e3d8d6d9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-25T09:03:47Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"d3919152-f086-4c68-9b6f-7de0310d67ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-446585 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.86s)
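Note: pause and unpause abort at the same step: minikube shells into the node and runs `sudo runc list -f json`, and runc reports that its state directory /run/runc does not exist under this crio setup. A diagnostic sketch (assuming the profile is still running; crictl gives the CRI-level view, which does work here):
	$ minikube -p json-output-446585 ssh -- sudo ls /run/runc    # reproduces the "no such file or directory" error
	$ minikube -p json-output-446585 ssh -- sudo crictl ps -q    # the containers are still visible at the CRI level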

                                                
                                    
x
+
TestPause/serial/Pause (7.21s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-993166 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-993166 --alsologtostderr -v=5: exit status 80 (2.049334862s)

                                                
                                                
-- stdout --
	* Pausing node pause-993166 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:27:41.188696  169701 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:27:41.189564  169701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:27:41.189656  169701 out.go:374] Setting ErrFile to fd 2...
	I1025 09:27:41.189677  169701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:27:41.190064  169701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:27:41.190375  169701 out.go:368] Setting JSON to false
	I1025 09:27:41.190426  169701 mustload.go:65] Loading cluster: pause-993166
	I1025 09:27:41.190919  169701 config.go:182] Loaded profile config "pause-993166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:27:41.191428  169701 cli_runner.go:164] Run: docker container inspect pause-993166 --format={{.State.Status}}
	I1025 09:27:41.209520  169701 host.go:66] Checking if "pause-993166" exists ...
	I1025 09:27:41.210034  169701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:27:41.318051  169701 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-25 09:27:41.30809393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:27:41.318729  169701 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-993166 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:27:41.323113  169701 out.go:179] * Pausing node pause-993166 ... 
	I1025 09:27:41.326038  169701 host.go:66] Checking if "pause-993166" exists ...
	I1025 09:27:41.326378  169701 ssh_runner.go:195] Run: systemctl --version
	I1025 09:27:41.326419  169701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-993166
	I1025 09:27:41.352386  169701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/pause-993166/id_rsa Username:docker}
	I1025 09:27:41.461093  169701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:27:41.474700  169701 pause.go:52] kubelet running: true
	I1025 09:27:41.474814  169701 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:27:41.717571  169701 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:27:41.717686  169701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:27:41.785831  169701 cri.go:89] found id: "2ca61daa9f640085335cff12d469a97aac33ed0ac86bb44265dd873b1b88ea7b"
	I1025 09:27:41.785854  169701 cri.go:89] found id: "886db0afca69285c74060f184d80584ce967a64cc5c10a575ddd4e8bee524b4c"
	I1025 09:27:41.785860  169701 cri.go:89] found id: "47fe39f906d8d7285850fca5853bc4537e57f459e57fd794878c97272cbeb938"
	I1025 09:27:41.785863  169701 cri.go:89] found id: "714664c95ab9237844e38f812333c61dd85473b8f3a9fe85af446cf295917418"
	I1025 09:27:41.785867  169701 cri.go:89] found id: "e8b340e357e7745dcfdd28f2f2837779d619fbcc48c0299f33f01bc4c4338c4d"
	I1025 09:27:41.785870  169701 cri.go:89] found id: "8ffe0ddc38b5cd6b1ec6e998b477c88dfe3de5017eec479555b1a02a271662c6"
	I1025 09:27:41.785875  169701 cri.go:89] found id: "76e14f6fbf01f85f84b7dbe2758815257045635f805bae17697c1057870d2e45"
	I1025 09:27:41.785878  169701 cri.go:89] found id: "2096c30fd3aa94582633e4db5513e39832fe004de98978844e87c7666828be6d"
	I1025 09:27:41.785881  169701 cri.go:89] found id: "8c0bc05f35e5cfd23f679343fe56282677202e373ae2cbd191ee4a80dd1cc492"
	I1025 09:27:41.785888  169701 cri.go:89] found id: "f33ce8d9cda081620ca6bb2e65c2a49aa70fd1d8d5e3fe5766fdce8e06ebedba"
	I1025 09:27:41.785895  169701 cri.go:89] found id: "69577785cdf0018c99ee3138be8b8466664873956fae9df607ab4b9f0211856b"
	I1025 09:27:41.785898  169701 cri.go:89] found id: "0bcb1e61aa4954f93376eff6871c63bd7fef85c6400a8063b3d8ccb280fc9dec"
	I1025 09:27:41.785902  169701 cri.go:89] found id: "eef32253d3c56e41d04f3cfb281703e63313570f9ae5713544b1d85e07c65fdb"
	I1025 09:27:41.785905  169701 cri.go:89] found id: "0e45c49e84a25010e320a542407973a38a5f8065f7093dd7b3d26e2e6c546c62"
	I1025 09:27:41.785908  169701 cri.go:89] found id: "1f115f72025a0c69095b7e23981b889c5f6b849f9233c4dc87b8320007c8dc3a"
	I1025 09:27:41.785913  169701 cri.go:89] found id: "ffc064adb3057dcbcb7e698a5374601d1a883faa933a2d0a24564611c6950319"
	I1025 09:27:41.785924  169701 cri.go:89] found id: ""
	I1025 09:27:41.785971  169701 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:27:41.796691  169701 retry.go:31] will retry after 297.875174ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:27:41Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:27:42.095217  169701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:27:42.113797  169701 pause.go:52] kubelet running: false
	I1025 09:27:42.113884  169701 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:27:42.298299  169701 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:27:42.298401  169701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:27:42.367745  169701 cri.go:89] found id: "2ca61daa9f640085335cff12d469a97aac33ed0ac86bb44265dd873b1b88ea7b"
	I1025 09:27:42.367765  169701 cri.go:89] found id: "886db0afca69285c74060f184d80584ce967a64cc5c10a575ddd4e8bee524b4c"
	I1025 09:27:42.367775  169701 cri.go:89] found id: "47fe39f906d8d7285850fca5853bc4537e57f459e57fd794878c97272cbeb938"
	I1025 09:27:42.367779  169701 cri.go:89] found id: "714664c95ab9237844e38f812333c61dd85473b8f3a9fe85af446cf295917418"
	I1025 09:27:42.367783  169701 cri.go:89] found id: "e8b340e357e7745dcfdd28f2f2837779d619fbcc48c0299f33f01bc4c4338c4d"
	I1025 09:27:42.367786  169701 cri.go:89] found id: "8ffe0ddc38b5cd6b1ec6e998b477c88dfe3de5017eec479555b1a02a271662c6"
	I1025 09:27:42.367789  169701 cri.go:89] found id: "76e14f6fbf01f85f84b7dbe2758815257045635f805bae17697c1057870d2e45"
	I1025 09:27:42.367814  169701 cri.go:89] found id: "2096c30fd3aa94582633e4db5513e39832fe004de98978844e87c7666828be6d"
	I1025 09:27:42.367822  169701 cri.go:89] found id: "8c0bc05f35e5cfd23f679343fe56282677202e373ae2cbd191ee4a80dd1cc492"
	I1025 09:27:42.367829  169701 cri.go:89] found id: "f33ce8d9cda081620ca6bb2e65c2a49aa70fd1d8d5e3fe5766fdce8e06ebedba"
	I1025 09:27:42.367843  169701 cri.go:89] found id: "69577785cdf0018c99ee3138be8b8466664873956fae9df607ab4b9f0211856b"
	I1025 09:27:42.367846  169701 cri.go:89] found id: "0bcb1e61aa4954f93376eff6871c63bd7fef85c6400a8063b3d8ccb280fc9dec"
	I1025 09:27:42.367849  169701 cri.go:89] found id: "eef32253d3c56e41d04f3cfb281703e63313570f9ae5713544b1d85e07c65fdb"
	I1025 09:27:42.367852  169701 cri.go:89] found id: "0e45c49e84a25010e320a542407973a38a5f8065f7093dd7b3d26e2e6c546c62"
	I1025 09:27:42.367855  169701 cri.go:89] found id: "1f115f72025a0c69095b7e23981b889c5f6b849f9233c4dc87b8320007c8dc3a"
	I1025 09:27:42.367860  169701 cri.go:89] found id: "ffc064adb3057dcbcb7e698a5374601d1a883faa933a2d0a24564611c6950319"
	I1025 09:27:42.367866  169701 cri.go:89] found id: ""
	I1025 09:27:42.367930  169701 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:27:42.379260  169701 retry.go:31] will retry after 538.797019ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:27:42Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:27:42.919150  169701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:27:42.932107  169701 pause.go:52] kubelet running: false
	I1025 09:27:42.932209  169701 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:27:43.091889  169701 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:27:43.092016  169701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:27:43.160371  169701 cri.go:89] found id: "2ca61daa9f640085335cff12d469a97aac33ed0ac86bb44265dd873b1b88ea7b"
	I1025 09:27:43.160392  169701 cri.go:89] found id: "886db0afca69285c74060f184d80584ce967a64cc5c10a575ddd4e8bee524b4c"
	I1025 09:27:43.160397  169701 cri.go:89] found id: "47fe39f906d8d7285850fca5853bc4537e57f459e57fd794878c97272cbeb938"
	I1025 09:27:43.160401  169701 cri.go:89] found id: "714664c95ab9237844e38f812333c61dd85473b8f3a9fe85af446cf295917418"
	I1025 09:27:43.160405  169701 cri.go:89] found id: "e8b340e357e7745dcfdd28f2f2837779d619fbcc48c0299f33f01bc4c4338c4d"
	I1025 09:27:43.160408  169701 cri.go:89] found id: "8ffe0ddc38b5cd6b1ec6e998b477c88dfe3de5017eec479555b1a02a271662c6"
	I1025 09:27:43.160411  169701 cri.go:89] found id: "76e14f6fbf01f85f84b7dbe2758815257045635f805bae17697c1057870d2e45"
	I1025 09:27:43.160414  169701 cri.go:89] found id: "2096c30fd3aa94582633e4db5513e39832fe004de98978844e87c7666828be6d"
	I1025 09:27:43.160417  169701 cri.go:89] found id: "8c0bc05f35e5cfd23f679343fe56282677202e373ae2cbd191ee4a80dd1cc492"
	I1025 09:27:43.160425  169701 cri.go:89] found id: "f33ce8d9cda081620ca6bb2e65c2a49aa70fd1d8d5e3fe5766fdce8e06ebedba"
	I1025 09:27:43.160428  169701 cri.go:89] found id: "69577785cdf0018c99ee3138be8b8466664873956fae9df607ab4b9f0211856b"
	I1025 09:27:43.160431  169701 cri.go:89] found id: "0bcb1e61aa4954f93376eff6871c63bd7fef85c6400a8063b3d8ccb280fc9dec"
	I1025 09:27:43.160434  169701 cri.go:89] found id: "eef32253d3c56e41d04f3cfb281703e63313570f9ae5713544b1d85e07c65fdb"
	I1025 09:27:43.160441  169701 cri.go:89] found id: "0e45c49e84a25010e320a542407973a38a5f8065f7093dd7b3d26e2e6c546c62"
	I1025 09:27:43.160444  169701 cri.go:89] found id: "1f115f72025a0c69095b7e23981b889c5f6b849f9233c4dc87b8320007c8dc3a"
	I1025 09:27:43.160450  169701 cri.go:89] found id: "ffc064adb3057dcbcb7e698a5374601d1a883faa933a2d0a24564611c6950319"
	I1025 09:27:43.160453  169701 cri.go:89] found id: ""
	I1025 09:27:43.160501  169701 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:27:43.174638  169701 out.go:203] 
	W1025 09:27:43.177515  169701 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:27:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:27:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:27:43.177538  169701 out.go:285] * 
	* 
	W1025 09:27:43.182361  169701 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:27:43.185393  169701 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-993166 --alsologtostderr -v=5" : exit status 80
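Note: the trace above also shows the side effect that makes this failure sticky: pause disables kubelet first ("kubelet running: true" on the first attempt, "false" on the retries) and only then fails on `runc list`, so the node is left with kubelet stopped but its containers unpaused. A sketch of confirming that half-paused state:
	$ minikube -p pause-993166 ssh -- sudo systemctl is-active kubelet    # expected to report inactive after the failed pause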
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-993166
helpers_test.go:243: (dbg) docker inspect pause-993166:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7bccdbee8081e7e37c336e2d18e06b3364513944a6f5188fe19f1a3e39b4ceba",
	        "Created": "2025-10-25T09:24:19.396299953Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 160092,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:24:19.475634382Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/7bccdbee8081e7e37c336e2d18e06b3364513944a6f5188fe19f1a3e39b4ceba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7bccdbee8081e7e37c336e2d18e06b3364513944a6f5188fe19f1a3e39b4ceba/hostname",
	        "HostsPath": "/var/lib/docker/containers/7bccdbee8081e7e37c336e2d18e06b3364513944a6f5188fe19f1a3e39b4ceba/hosts",
	        "LogPath": "/var/lib/docker/containers/7bccdbee8081e7e37c336e2d18e06b3364513944a6f5188fe19f1a3e39b4ceba/7bccdbee8081e7e37c336e2d18e06b3364513944a6f5188fe19f1a3e39b4ceba-json.log",
	        "Name": "/pause-993166",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-993166:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-993166",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7bccdbee8081e7e37c336e2d18e06b3364513944a6f5188fe19f1a3e39b4ceba",
	                "LowerDir": "/var/lib/docker/overlay2/8eccda200bd725632d5da0950b13432c512c56fd35b095d200ec60674c53a01f-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8eccda200bd725632d5da0950b13432c512c56fd35b095d200ec60674c53a01f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8eccda200bd725632d5da0950b13432c512c56fd35b095d200ec60674c53a01f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8eccda200bd725632d5da0950b13432c512c56fd35b095d200ec60674c53a01f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-993166",
	                "Source": "/var/lib/docker/volumes/pause-993166/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-993166",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-993166",
	                "name.minikube.sigs.k8s.io": "pause-993166",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "804c3cbdbf052b79f715b608308eb0293071fb77e39619d53ea16bce1a97767e",
	            "SandboxKey": "/var/run/docker/netns/804c3cbdbf05",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-993166": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:a2:2a:b6:4a:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "55cf5daa39d0de1da2c11e539b397ddf464a4874422d2addedf0dde2d66322ac",
	                    "EndpointID": "dbf238f9be11e8d1aa73fad1e689c8d56e14a1ca7d0b0108027d7ac16643fdae",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-993166",
	                        "7bccdbee8081"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
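Note: individual fields of this inspect document can be extracted with Go templates instead of dumping the whole JSON; the pause code does exactly that to resolve the SSH port (see the cli_runner.go line earlier in this section). For example, a sketch against this container:
	$ docker inspect -f '{{.State.Status}}' pause-993166    # "running"
	$ docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-993166    # "33023"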
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-993166 -n pause-993166
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-993166 -n pause-993166: exit status 2 (378.678589ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-993166 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-993166 logs -n 25: (1.747020482s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-693294 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-693294       │ jenkins │ v1.37.0 │ 25 Oct 25 09:20 UTC │ 25 Oct 25 09:21 UTC │
	│ start   │ -p missing-upgrade-334875 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-334875    │ jenkins │ v1.37.0 │ 25 Oct 25 09:21 UTC │ 25 Oct 25 09:22 UTC │
	│ delete  │ -p NoKubernetes-693294                                                                                                                   │ NoKubernetes-693294       │ jenkins │ v1.37.0 │ 25 Oct 25 09:21 UTC │ 25 Oct 25 09:21 UTC │
	│ start   │ -p NoKubernetes-693294 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-693294       │ jenkins │ v1.37.0 │ 25 Oct 25 09:21 UTC │ 25 Oct 25 09:21 UTC │
	│ ssh     │ -p NoKubernetes-693294 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-693294       │ jenkins │ v1.37.0 │ 25 Oct 25 09:21 UTC │                     │
	│ stop    │ -p NoKubernetes-693294                                                                                                                   │ NoKubernetes-693294       │ jenkins │ v1.37.0 │ 25 Oct 25 09:21 UTC │ 25 Oct 25 09:21 UTC │
	│ start   │ -p NoKubernetes-693294 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-693294       │ jenkins │ v1.37.0 │ 25 Oct 25 09:21 UTC │ 25 Oct 25 09:22 UTC │
	│ ssh     │ -p NoKubernetes-693294 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-693294       │ jenkins │ v1.37.0 │ 25 Oct 25 09:22 UTC │                     │
	│ delete  │ -p NoKubernetes-693294                                                                                                                   │ NoKubernetes-693294       │ jenkins │ v1.37.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:22 UTC │
	│ start   │ -p kubernetes-upgrade-707917 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-707917 │ jenkins │ v1.37.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:22 UTC │
	│ delete  │ -p missing-upgrade-334875                                                                                                                │ missing-upgrade-334875    │ jenkins │ v1.37.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:22 UTC │
	│ start   │ -p stopped-upgrade-971794 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-971794    │ jenkins │ v1.32.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:22 UTC │
	│ stop    │ stopped-upgrade-971794 stop                                                                                                              │ stopped-upgrade-971794    │ jenkins │ v1.32.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:22 UTC │
	│ stop    │ -p kubernetes-upgrade-707917                                                                                                             │ kubernetes-upgrade-707917 │ jenkins │ v1.37.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:22 UTC │
	│ start   │ -p stopped-upgrade-971794 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-971794    │ jenkins │ v1.37.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:23 UTC │
	│ start   │ -p kubernetes-upgrade-707917 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-707917 │ jenkins │ v1.37.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:27 UTC │
	│ delete  │ -p stopped-upgrade-971794                                                                                                                │ stopped-upgrade-971794    │ jenkins │ v1.37.0 │ 25 Oct 25 09:23 UTC │ 25 Oct 25 09:23 UTC │
	│ start   │ -p running-upgrade-826823 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-826823    │ jenkins │ v1.32.0 │ 25 Oct 25 09:23 UTC │ 25 Oct 25 09:23 UTC │
	│ start   │ -p running-upgrade-826823 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-826823    │ jenkins │ v1.37.0 │ 25 Oct 25 09:23 UTC │ 25 Oct 25 09:24 UTC │
	│ delete  │ -p running-upgrade-826823                                                                                                                │ running-upgrade-826823    │ jenkins │ v1.37.0 │ 25 Oct 25 09:24 UTC │ 25 Oct 25 09:24 UTC │
	│ start   │ -p pause-993166 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-993166              │ jenkins │ v1.37.0 │ 25 Oct 25 09:24 UTC │ 25 Oct 25 09:25 UTC │
	│ start   │ -p pause-993166 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-993166              │ jenkins │ v1.37.0 │ 25 Oct 25 09:25 UTC │ 25 Oct 25 09:27 UTC │
	│ start   │ -p kubernetes-upgrade-707917 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-707917 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │                     │
	│ start   │ -p kubernetes-upgrade-707917 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-707917 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │                     │
	│ pause   │ -p pause-993166 --alsologtostderr -v=5                                                                                                   │ pause-993166              │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:27:32
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:27:32.891590  168764 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:27:32.891757  168764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:27:32.891785  168764 out.go:374] Setting ErrFile to fd 2...
	I1025 09:27:32.891806  168764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:27:32.892065  168764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:27:32.892455  168764 out.go:368] Setting JSON to false
	I1025 09:27:32.894107  168764 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4204,"bootTime":1761380249,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:27:32.894204  168764 start.go:141] virtualization:  
	I1025 09:27:32.898169  168764 out.go:179] * [kubernetes-upgrade-707917] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:27:32.902464  168764 notify.go:220] Checking for updates...
	I1025 09:27:32.903408  168764 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:27:32.906968  168764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:27:32.909815  168764 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:27:32.912571  168764 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:27:32.915424  168764 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:27:32.918262  168764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:27:32.921496  168764 config.go:182] Loaded profile config "kubernetes-upgrade-707917": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:27:32.922170  168764 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:27:32.959604  168764 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:27:32.959718  168764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:27:33.030928  168764 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 09:27:33.021135037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:27:33.031072  168764 docker.go:318] overlay module found
	I1025 09:27:33.034413  168764 out.go:179] * Using the docker driver based on existing profile
	I1025 09:27:33.037271  168764 start.go:305] selected driver: docker
	I1025 09:27:33.037292  168764 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-707917 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-707917 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:27:33.037398  168764 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:27:33.038197  168764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:27:33.096412  168764 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 09:27:33.086555858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:27:33.096742  168764 cni.go:84] Creating CNI manager for ""
	I1025 09:27:33.096805  168764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:27:33.096848  168764 start.go:349] cluster config:
	{Name:kubernetes-upgrade-707917 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-707917 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:27:33.100376  168764 out.go:179] * Starting "kubernetes-upgrade-707917" primary control-plane node in "kubernetes-upgrade-707917" cluster
	I1025 09:27:33.103514  168764 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:27:33.106582  168764 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:27:28.252043  164447 node_ready.go:49] node "pause-993166" is "Ready"
	I1025 09:27:28.252069  164447 node_ready.go:38] duration metric: took 9.880954173s for node "pause-993166" to be "Ready" ...
	I1025 09:27:28.252083  164447 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:27:28.252139  164447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:27:28.288361  164447 api_server.go:72] duration metric: took 10.120121128s to wait for apiserver process to appear ...
	I1025 09:27:28.288383  164447 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:27:28.288403  164447 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:27:28.446874  164447 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:27:28.446959  164447 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:27:28.789232  164447 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:27:28.821295  164447 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:27:28.821339  164447 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:27:29.288896  164447 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:27:29.311705  164447 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:27:29.311742  164447 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:27:29.789364  164447 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:27:29.797538  164447 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 09:27:29.798602  164447 api_server.go:141] control plane version: v1.34.1
	I1025 09:27:29.798627  164447 api_server.go:131] duration metric: took 1.510236323s to wait for apiserver health ...
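Note: the 500 responses above come from /healthz aggregating the apiserver's per-poststarthook probes; minikube simply re-polls the endpoint roughly every 500ms (09:27:28.288, 28.789, 29.288, 29.789) until every hook reports ok and the status flips to 200. A minimal sketch of such a poll loop, assuming an InsecureSkipVerify transport for brevity (the real client trusts the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz retries /healthz about every 500ms until it returns 200
    // or the deadline passes, mirroring the retry cadence in the log.
    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // all poststarthooks report ok
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        fmt.Println(pollHealthz("https://192.168.85.2:8443/healthz", time.Minute))
    }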
	I1025 09:27:29.798636  164447 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:27:29.801883  164447 system_pods.go:59] 7 kube-system pods found
	I1025 09:27:29.801914  164447 system_pods.go:61] "coredns-66bc5c9577-jwhsz" [4f925e04-50bc-46af-9c96-c0ec0fb36a26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:27:29.801923  164447 system_pods.go:61] "etcd-pause-993166" [60b9d7a1-fe2b-48f9-8a1c-fdb00fb49a7c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:27:29.801931  164447 system_pods.go:61] "kindnet-f8dj2" [0a881c6a-52f1-4a77-a887-6a1f589c8605] Running
	I1025 09:27:29.801940  164447 system_pods.go:61] "kube-apiserver-pause-993166" [a6ee0d1f-983c-4e73-a86f-b2d267ffd56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:27:29.801952  164447 system_pods.go:61] "kube-controller-manager-pause-993166" [d6e20edc-31fe-453a-8bf0-d72e17fd0bda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:27:29.801957  164447 system_pods.go:61] "kube-proxy-5rlkq" [18988bb4-0b22-4e84-99ed-c40fd8525128] Running
	I1025 09:27:29.801967  164447 system_pods.go:61] "kube-scheduler-pause-993166" [3afb80c3-fb10-43a9-b671-8c849cbd9786] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:27:29.802008  164447 system_pods.go:74] duration metric: took 3.365951ms to wait for pod list to return data ...
	I1025 09:27:29.802017  164447 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:27:29.804486  164447 default_sa.go:45] found service account: "default"
	I1025 09:27:29.804508  164447 default_sa.go:55] duration metric: took 2.483063ms for default service account to be created ...
	I1025 09:27:29.804518  164447 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:27:29.807179  164447 system_pods.go:86] 7 kube-system pods found
	I1025 09:27:29.807208  164447 system_pods.go:89] "coredns-66bc5c9577-jwhsz" [4f925e04-50bc-46af-9c96-c0ec0fb36a26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:27:29.807218  164447 system_pods.go:89] "etcd-pause-993166" [60b9d7a1-fe2b-48f9-8a1c-fdb00fb49a7c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:27:29.807224  164447 system_pods.go:89] "kindnet-f8dj2" [0a881c6a-52f1-4a77-a887-6a1f589c8605] Running
	I1025 09:27:29.807230  164447 system_pods.go:89] "kube-apiserver-pause-993166" [a6ee0d1f-983c-4e73-a86f-b2d267ffd56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:27:29.807243  164447 system_pods.go:89] "kube-controller-manager-pause-993166" [d6e20edc-31fe-453a-8bf0-d72e17fd0bda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:27:29.807250  164447 system_pods.go:89] "kube-proxy-5rlkq" [18988bb4-0b22-4e84-99ed-c40fd8525128] Running
	I1025 09:27:29.807256  164447 system_pods.go:89] "kube-scheduler-pause-993166" [3afb80c3-fb10-43a9-b671-8c849cbd9786] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:27:29.807266  164447 system_pods.go:126] duration metric: took 2.743146ms to wait for k8s-apps to be running ...
	I1025 09:27:29.807278  164447 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:27:29.807334  164447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:27:29.824001  164447 system_svc.go:56] duration metric: took 16.714542ms WaitForService to wait for kubelet
	I1025 09:27:29.824033  164447 kubeadm.go:586] duration metric: took 11.655797817s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:27:29.824054  164447 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:27:29.826792  164447 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:27:29.826832  164447 node_conditions.go:123] node cpu capacity is 2
	I1025 09:27:29.826844  164447 node_conditions.go:105] duration metric: took 2.784336ms to run NodePressure ...
	I1025 09:27:29.826857  164447 start.go:241] waiting for startup goroutines ...
	I1025 09:27:29.826866  164447 start.go:246] waiting for cluster config update ...
	I1025 09:27:29.826875  164447 start.go:255] writing updated cluster config ...
	I1025 09:27:29.827166  164447 ssh_runner.go:195] Run: rm -f paused
	I1025 09:27:29.830492  164447 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:27:29.831077  164447 kapi.go:59] client config for pause-993166: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/profiles/pause-993166/client.crt", KeyFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/profiles/pause-993166/client.key", CAFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
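Note: the rest.Config dump above is the client-go configuration built from the profile's client cert/key and the cluster CA. A minimal sketch of constructing an equivalent client (paths shortened to "..."; the full ones appear in the dump, and the kube-system pod list stands in for the readiness checks that follow):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // cert/key/CA files as in the rest.Config dump above
        cfg := &rest.Config{
            Host: "https://192.168.85.2:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: ".../profiles/pause-993166/client.crt",
                KeyFile:  ".../profiles/pause-993166/client.key",
                CAFile:   ".../.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("kube-system pods:", len(pods.Items))
    }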
	I1025 09:27:29.835594  164447 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jwhsz" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:27:31.849341  164447 pod_ready.go:104] pod "coredns-66bc5c9577-jwhsz" is not "Ready", error: node "pause-993166" hosting pod "coredns-66bc5c9577-jwhsz" is not "Ready" (will retry)
	I1025 09:27:33.109569  168764 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:27:33.109716  168764 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:27:33.109756  168764 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:27:33.109769  168764 cache.go:58] Caching tarball of preloaded images
	I1025 09:27:33.109840  168764 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:27:33.109854  168764 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:27:33.109956  168764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/config.json ...
	I1025 09:27:33.131327  168764 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:27:33.131350  168764 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:27:33.131368  168764 cache.go:232] Successfully downloaded all kic artifacts
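Note: the pull is skipped because the pinned kicbase image is already in the local docker daemon (image.go:100 above). A sketch of that existence check, assuming the docker CLI is on PATH (minikube's real check talks to the daemon API):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // imageInDaemon reports whether the local docker daemon already holds
    // ref, in which case the pull and load can both be skipped.
    func imageInDaemon(ref string) bool {
        // `docker image inspect` exits non-zero when the image is absent
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        // tag shortened; the log pins the image by @sha256 digest as well
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
        fmt.Println("in daemon:", imageInDaemon(ref))
    }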
	I1025 09:27:33.131390  168764 start.go:360] acquireMachinesLock for kubernetes-upgrade-707917: {Name:mkab74429f78f38bcfb1582561347a903c5ed810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:27:33.131458  168764 start.go:364] duration metric: took 43.98µs to acquireMachinesLock for "kubernetes-upgrade-707917"
	I1025 09:27:33.131481  168764 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:27:33.131489  168764 fix.go:54] fixHost starting: 
	I1025 09:27:33.131758  168764 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-707917 --format={{.State.Status}}
	I1025 09:27:33.151523  168764 fix.go:112] recreateIfNeeded on kubernetes-upgrade-707917: state=Running err=<nil>
	W1025 09:27:33.151573  168764 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 09:27:33.156795  168764 out.go:252] * Updating the running docker "kubernetes-upgrade-707917" container ...
	I1025 09:27:33.156832  168764 machine.go:93] provisionDockerMachine start ...
	I1025 09:27:33.156925  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:33.174508  168764 main.go:141] libmachine: Using SSH client type: native
	I1025 09:27:33.174835  168764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1025 09:27:33.174850  168764 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:27:33.325544  168764 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-707917
	
	I1025 09:27:33.325574  168764 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-707917"
	I1025 09:27:33.325640  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:33.346341  168764 main.go:141] libmachine: Using SSH client type: native
	I1025 09:27:33.346664  168764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1025 09:27:33.346683  168764 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-707917 && echo "kubernetes-upgrade-707917" | sudo tee /etc/hostname
	I1025 09:27:33.529099  168764 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-707917
	
	I1025 09:27:33.529271  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:33.548216  168764 main.go:141] libmachine: Using SSH client type: native
	I1025 09:27:33.548535  168764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1025 09:27:33.548555  168764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-707917' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-707917/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-707917' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:27:33.706378  168764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
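Note: the shell snippet above is an idempotent /etc/hosts update: leave the file alone if any line already ends in the hostname, otherwise rewrite an existing 127.0.1.1 entry or append a new one. The same logic as a pure-Go sketch (ensureHostname is a hypothetical helper, shown only to make the grep/sed branches explicit):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostname mirrors the shell above: no-op if a line already ends
    // in the hostname, otherwise rewrite a 127.0.1.1 entry or append one.
    func ensureHostname(hosts, name string) string {
        if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
            return hosts // hostname already mapped
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(ensureHostname("127.0.0.1 localhost\n", "kubernetes-upgrade-707917"))
    }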
	I1025 09:27:33.706400  168764 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:27:33.706426  168764 ubuntu.go:190] setting up certificates
	I1025 09:27:33.706436  168764 provision.go:84] configureAuth start
	I1025 09:27:33.706516  168764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-707917
	I1025 09:27:33.725361  168764 provision.go:143] copyHostCerts
	I1025 09:27:33.725435  168764 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:27:33.725453  168764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:27:33.725536  168764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:27:33.725651  168764 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:27:33.725662  168764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:27:33.725690  168764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:27:33.725757  168764 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:27:33.725765  168764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:27:33.725791  168764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 09:27:33.725851  168764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-707917 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-707917 localhost minikube]
	I1025 09:27:34.801296  168764 provision.go:177] copyRemoteCerts
	I1025 09:27:34.801362  168764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:27:34.801404  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:34.819152  168764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/kubernetes-upgrade-707917/id_rsa Username:docker}
	I1025 09:27:34.930880  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1025 09:27:34.967702  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:27:34.997067  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:27:35.031425  168764 provision.go:87] duration metric: took 1.324965911s to configureAuth
	I1025 09:27:35.031452  168764 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:27:35.031638  168764 config.go:182] Loaded profile config "kubernetes-upgrade-707917": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:27:35.031753  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:35.051684  168764 main.go:141] libmachine: Using SSH client type: native
	I1025 09:27:35.052004  168764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1025 09:27:35.052019  168764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:27:35.738732  168764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:27:35.738814  168764 machine.go:96] duration metric: took 2.581973416s to provisionDockerMachine
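Note: the CRIO_MINIKUBE_OPTIONS written a few lines up mark the cluster's service CIDR (10.96.0.0/12) as an insecure registry range, so a registry addon exposed on a ClusterIP can be reached over plain HTTP. A small net/netip check that an address falls inside that range (the registry IP here is hypothetical):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // 10.96.0.0/12 is the service CIDR passed to CRI-O via
        // --insecure-registry in the log above
        cidr := netip.MustParsePrefix("10.96.0.0/12")
        registryIP := netip.MustParseAddr("10.96.0.10") // hypothetical ClusterIP
        fmt.Println("inside service CIDR:", cidr.Contains(registryIP))
    }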
	I1025 09:27:35.738840  168764 start.go:293] postStartSetup for "kubernetes-upgrade-707917" (driver="docker")
	I1025 09:27:35.738882  168764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:27:35.738972  168764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:27:35.739050  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:35.757072  168764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/kubernetes-upgrade-707917/id_rsa Username:docker}
	I1025 09:27:35.880985  168764 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:27:35.888831  168764 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:27:35.888857  168764 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:27:35.888869  168764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:27:35.888931  168764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:27:35.889016  168764 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:27:35.889124  168764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:27:35.908103  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:27:35.933642  168764 start.go:296] duration metric: took 194.757044ms for postStartSetup
	I1025 09:27:35.933722  168764 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:27:35.933795  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:35.953815  168764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/kubernetes-upgrade-707917/id_rsa Username:docker}
	I1025 09:27:36.072059  168764 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:27:36.077281  168764 fix.go:56] duration metric: took 2.945784566s for fixHost
	I1025 09:27:36.077307  168764 start.go:83] releasing machines lock for "kubernetes-upgrade-707917", held for 2.945837867s
	I1025 09:27:36.077377  168764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-707917
	I1025 09:27:36.095960  168764 ssh_runner.go:195] Run: cat /version.json
	I1025 09:27:36.096023  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:36.096272  168764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:27:36.096325  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:36.125902  168764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/kubernetes-upgrade-707917/id_rsa Username:docker}
	I1025 09:27:36.128657  168764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/kubernetes-upgrade-707917/id_rsa Username:docker}
	I1025 09:27:36.230153  168764 ssh_runner.go:195] Run: systemctl --version
	I1025 09:27:36.324455  168764 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:27:36.380993  168764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:27:36.386053  168764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:27:36.386129  168764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:27:36.394590  168764 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
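Note: the find/mv one-liner above renames any *bridge* or *podman* CNI config to *.mk_disabled so it cannot conflict with the kindnet CNI; here it matched nothing. The equivalent rename pass as a Go sketch:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // same patterns as the find invocation in the log
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, _ := filepath.Glob(pat)
            for _, f := range matches {
                if strings.HasSuffix(f, ".mk_disabled") {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(f, f+".mk_disabled"); err == nil {
                    fmt.Println("disabled", f)
                }
            }
        }
    }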
	I1025 09:27:36.394614  168764 start.go:495] detecting cgroup driver to use...
	I1025 09:27:36.394675  168764 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:27:36.394736  168764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:27:36.411215  168764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:27:36.424808  168764 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:27:36.424871  168764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:27:36.441217  168764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:27:36.454509  168764 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:27:36.592165  168764 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:27:36.809240  168764 docker.go:234] disabling docker service ...
	I1025 09:27:36.809354  168764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:27:36.827909  168764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:27:36.861034  168764 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:27:37.099844  168764 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:27:37.365477  168764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:27:37.380847  168764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:27:37.415537  168764 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:27:37.415659  168764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:27:37.428454  168764 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:27:37.428606  168764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:27:37.441827  168764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:27:37.453214  168764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:27:37.471430  168764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:27:37.483258  168764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:27:37.500465  168764 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:27:37.508958  168764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:27:37.518116  168764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:27:37.532416  168764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:27:37.542055  168764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:27:37.767533  168764 ssh_runner.go:195] Run: sudo systemctl restart crio
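Note: the sed edits above pin CRI-O's pause image, switch it to the cgroupfs cgroup manager, and open unprivileged ports before the restart. The first two line-oriented rewrites expressed with Go regexps, as a sketch over a trimmed sample of /etc/crio/crio.conf.d/02-crio.conf:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
        // pin the pause image, as in the first sed above
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        // force the cgroupfs cgroup manager, as in the second sed
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }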
	W1025 09:27:34.354072  164447 pod_ready.go:104] pod "coredns-66bc5c9577-jwhsz" is not "Ready", error: node "pause-993166" hosting pod "coredns-66bc5c9577-jwhsz" is not "Ready" (will retry)
	W1025 09:27:36.842581  164447 pod_ready.go:104] pod "coredns-66bc5c9577-jwhsz" is not "Ready", error: node "pause-993166" hosting pod "coredns-66bc5c9577-jwhsz" is not "Ready" (will retry)
	I1025 09:27:38.030416  168764 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:27:38.030530  168764 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:27:38.034746  168764 start.go:563] Will wait 60s for crictl version
	I1025 09:27:38.034867  168764 ssh_runner.go:195] Run: which crictl
	I1025 09:27:38.039303  168764 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:27:38.072031  168764 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:27:38.072116  168764 ssh_runner.go:195] Run: crio --version
	I1025 09:27:38.104902  168764 ssh_runner.go:195] Run: crio --version
	I1025 09:27:38.138366  168764 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:27:38.141396  168764 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-707917 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:27:38.158375  168764 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 09:27:38.162448  168764 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-707917 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-707917 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:27:38.162569  168764 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:27:38.162623  168764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:27:38.197058  168764 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:27:38.197081  168764 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:27:38.197138  168764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:27:38.224479  168764 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:27:38.224504  168764 cache_images.go:85] Images are preloaded, skipping loading
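Note: the preload decision is driven by `sudo crictl images --output json` (run twice above): when every expected image is already in CRI-O's store, tarball extraction and image loading are skipped. A sketch that parses that output (the struct fields follow the CRI ListImages JSON shape; treat the exact field names as an assumption to verify against your crictl version):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            fmt.Println("bad JSON:", err)
            return
        }
        for _, img := range list.Images {
            fmt.Println(img.RepoTags)
        }
    }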
	I1025 09:27:38.224511  168764 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 09:27:38.224624  168764 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-707917 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-707917 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:27:38.224708  168764 ssh_runner.go:195] Run: crio config
	I1025 09:27:38.290926  168764 cni.go:84] Creating CNI manager for ""
	I1025 09:27:38.290950  168764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:27:38.290967  168764 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:27:38.290994  168764 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-707917 NodeName:kubernetes-upgrade-707917 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:27:38.291125  168764 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-707917"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:27:38.291201  168764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:27:38.298840  168764 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:27:38.298928  168764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:27:38.306236  168764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1025 09:27:38.319058  168764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:27:38.332369  168764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
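Note: the kubeadm config printed above is rendered in-memory and shipped to /var/tmp/minikube/kubeadm.yaml.new. A stripped-down sketch of that render step with text/template; the field set here is a tiny, assumed subset of the real parameter struct:

    package main

    import (
        "os"
        "text/template"
    )

    // a tiny subset of the values visible in the rendered config above
    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    clusterName: mk
    controlPlaneEndpoint: {{.Endpoint}}:{{.Port}}
    kubernetesVersion: {{.Version}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        _ = t.Execute(os.Stdout, map[string]any{
            "Endpoint": "control-plane.minikube.internal",
            "Port":     8443,
            "Version":  "v1.34.1",
        })
    }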
	I1025 09:27:38.347649  168764 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:27:38.351285  168764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:27:38.487365  168764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:27:38.500831  168764 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917 for IP: 192.168.76.2
	I1025 09:27:38.500857  168764 certs.go:195] generating shared ca certs ...
	I1025 09:27:38.500873  168764 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:27:38.501050  168764 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:27:38.501117  168764 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:27:38.501132  168764 certs.go:257] generating profile certs ...
	I1025 09:27:38.501238  168764 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/client.key
	I1025 09:27:38.501317  168764 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/apiserver.key.b64a597d
	I1025 09:27:38.501385  168764 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/proxy-client.key
	I1025 09:27:38.501530  168764 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:27:38.501577  168764 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:27:38.501593  168764 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:27:38.501623  168764 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:27:38.501684  168764 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:27:38.501719  168764 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:27:38.501794  168764 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:27:38.502463  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:27:38.524933  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:27:38.543701  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:27:38.561569  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:27:38.584306  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1025 09:27:38.604137  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:27:38.622933  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:27:38.641931  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:27:38.665568  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:27:38.688722  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:27:38.710736  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:27:38.732601  168764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:27:38.745883  168764 ssh_runner.go:195] Run: openssl version
	I1025 09:27:38.752566  168764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:27:38.761464  168764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:27:38.765442  168764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:27:38.765582  168764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:27:38.815569  168764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:27:38.825097  168764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:27:38.834281  168764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:27:38.844801  168764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:27:38.844906  168764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:27:38.895759  168764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:27:38.903631  168764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:27:38.912299  168764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:27:38.916211  168764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:27:38.916304  168764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:27:38.965393  168764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
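
The three hash-and-link sequences above install each CA certificate under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0), which is how TLS libraries locate trusted CAs in /etc/ssl/certs. Below is a minimal Go sketch of that step, assuming openssl is on PATH; it illustrates the procedure the log shows and is not minikube's own implementation (the cert path is taken from the log, the rest is illustrative).

// cahash.go - sketch: compute a cert's OpenSSL subject hash and publish it
// under /etc/ssl/certs/<hash>.0, mirroring the test/ln -fs lines above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// subjectHash shells out to `openssl x509 -hash -noout`, exactly as the
// ssh_runner lines above do, and returns the 8-hex-digit subject hash.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("openssl hash of %s: %w", certPath, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	hash, err := subjectHash(cert)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent to: test -L <link> || ln -fs <cert> <link>
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(cert, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	fmt.Println(link, "->", cert)
}
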
	I1025 09:27:38.973658  168764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:27:38.977624  168764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:27:39.019776  168764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:27:39.061821  168764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:27:39.103497  168764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:27:39.144584  168764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:27:39.191110  168764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
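
The six -checkend 86400 runs above verify that every control-plane certificate remains valid for at least the next 24 hours before being reused. The same check can be done in-process with crypto/x509 instead of shelling out; a hedged sketch follows, with the path being one of the certs from the log:

// checkend.go - sketch of what `openssl x509 -noout -checkend 86400` tests:
// whether the certificate will have expired 24 hours from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, window time.Duration) (bool, error) {
	raw, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// openssl's -checkend N fails when NotAfter is earlier than now+N seconds.
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	// Path taken from the log; adjust for a different cluster profile.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if expiring {
		fmt.Println("certificate expires within 24h: regeneration needed")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}
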
	I1025 09:27:39.232176  168764 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-707917 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-707917 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:27:39.232266  168764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:27:39.232335  168764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:27:39.260233  168764 cri.go:89] found id: "bf4e471a5b488ca7cf28a3bc04eddff89e1632d40f77c4d271f14472ef7129c8"
	I1025 09:27:39.260301  168764 cri.go:89] found id: "155b666bd5611f0afd2e8ea02266eb6228ed2dd5ce8d29a9de9103d35269a59d"
	I1025 09:27:39.260313  168764 cri.go:89] found id: "e14be6f20c6a4d1308920a126fe347bd13f8e503cc80839d9dd18e6dcc0f1dfa"
	I1025 09:27:39.260318  168764 cri.go:89] found id: "ecd9b68ee75ca47b8f0c237863e410b4bebd1838dd0683b343a482379667adcf"
	I1025 09:27:39.260321  168764 cri.go:89] found id: ""
	I1025 09:27:39.260391  168764 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:27:39.271411  168764 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:27:39Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:27:39.271543  168764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:27:39.279329  168764 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:27:39.279355  168764 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:27:39.279406  168764 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:27:39.286441  168764 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:27:39.287134  168764 kubeconfig.go:125] found "kubernetes-upgrade-707917" server: "https://192.168.76.2:8443"
	I1025 09:27:39.288015  168764 kapi.go:59] client config for kubernetes-upgrade-707917: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/client.crt", KeyFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/client.key", CAFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:27:39.288516  168764 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 09:27:39.288534  168764 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 09:27:39.288539  168764 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 09:27:39.288544  168764 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 09:27:39.288549  168764 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 09:27:39.288855  168764 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:27:39.296226  168764 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 09:27:39.296299  168764 kubeadm.go:601] duration metric: took 16.937074ms to restartPrimaryControlPlane
	I1025 09:27:39.296314  168764 kubeadm.go:402] duration metric: took 64.147365ms to StartCluster
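
restartPrimaryControlPlane finishes in ~17ms above because the diff -u of the deployed kubeadm.yaml against the freshly rendered kubeadm.yaml.new exits 0: the configs are identical, so no reconfiguration is needed. A short sketch of that exit-code decision:

// needsreconfig.go - sketch of the decision the log records: if
// `diff -u old new` exits 0 the configs match and reconfiguration is skipped.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configsDiffer reports whether the two kubeadm configs are not identical.
func configsDiffer(deployed, generated string) (bool, error) {
	err := exec.Command("diff", "-u", deployed, generated).Run()
	if err == nil {
		return false, nil // exit 0: files identical
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil // exit 1: files differ
	}
	return false, err // exit >1: diff itself failed (e.g. missing file)
}

func main() {
	differ, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if differ {
		fmt.Println("cluster requires reconfiguration")
	} else {
		fmt.Println("the running cluster does not require reconfiguration")
	}
}
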
	I1025 09:27:39.296330  168764 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:27:39.296392  168764 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:27:39.297306  168764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:27:39.297534  168764 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:27:39.297831  168764 config.go:182] Loaded profile config "kubernetes-upgrade-707917": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:27:39.297881  168764 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:27:39.297945  168764 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-707917"
	I1025 09:27:39.297958  168764 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-707917"
	W1025 09:27:39.297968  168764 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:27:39.298248  168764 host.go:66] Checking if "kubernetes-upgrade-707917" exists ...
	I1025 09:27:39.298402  168764 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-707917"
	I1025 09:27:39.298425  168764 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-707917"
	I1025 09:27:39.298719  168764 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-707917 --format={{.State.Status}}
	I1025 09:27:39.298832  168764 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-707917 --format={{.State.Status}}
	I1025 09:27:39.302164  168764 out.go:179] * Verifying Kubernetes components...
	I1025 09:27:39.305311  168764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:27:39.332903  168764 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:27:38.842409  164447 pod_ready.go:94] pod "coredns-66bc5c9577-jwhsz" is "Ready"
	I1025 09:27:38.842434  164447 pod_ready.go:86] duration metric: took 9.006815657s for pod "coredns-66bc5c9577-jwhsz" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:38.849277  164447 pod_ready.go:83] waiting for pod "etcd-pause-993166" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:38.858343  164447 pod_ready.go:94] pod "etcd-pause-993166" is "Ready"
	I1025 09:27:38.858419  164447 pod_ready.go:86] duration metric: took 9.117254ms for pod "etcd-pause-993166" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:38.861462  164447 pod_ready.go:83] waiting for pod "kube-apiserver-pause-993166" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:38.869814  164447 pod_ready.go:94] pod "kube-apiserver-pause-993166" is "Ready"
	I1025 09:27:38.869837  164447 pod_ready.go:86] duration metric: took 8.338467ms for pod "kube-apiserver-pause-993166" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:38.874825  164447 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-993166" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:39.879906  164447 pod_ready.go:94] pod "kube-controller-manager-pause-993166" is "Ready"
	I1025 09:27:39.879936  164447 pod_ready.go:86] duration metric: took 1.005088963s for pod "kube-controller-manager-pause-993166" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:40.040748  164447 pod_ready.go:83] waiting for pod "kube-proxy-5rlkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:40.439227  164447 pod_ready.go:94] pod "kube-proxy-5rlkq" is "Ready"
	I1025 09:27:40.439259  164447 pod_ready.go:86] duration metric: took 398.474514ms for pod "kube-proxy-5rlkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:40.639320  164447 pod_ready.go:83] waiting for pod "kube-scheduler-pause-993166" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:41.039091  164447 pod_ready.go:94] pod "kube-scheduler-pause-993166" is "Ready"
	I1025 09:27:41.039118  164447 pod_ready.go:86] duration metric: took 399.771119ms for pod "kube-scheduler-pause-993166" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:41.039129  164447 pod_ready.go:40] duration metric: took 11.208606628s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:27:41.096308  164447 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:27:41.099813  164447 out.go:179] * Done! kubectl is now configured to use "pause-993166" cluster and "default" namespace by default
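
The pod_ready waits above poll each kube-system pod until it reports the Ready condition (or disappears), and only then is the "Done!" line emitted. A client-go sketch of one such readiness check; the kubeconfig path and pod name below are placeholders, not the test's actual values:

// podready.go - sketch (assuming client-go) of the check the pod_ready
// lines perform: fetch a pod and inspect its Ready condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the log uses the test profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-pause-993166", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q ready: %v\n", pod.Name, isPodReady(pod))
}
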
	I1025 09:27:39.335897  168764 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:27:39.335918  168764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:27:39.335984  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:39.338560  168764 kapi.go:59] client config for kubernetes-upgrade-707917: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/client.crt", KeyFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/client.key", CAFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:27:39.338870  168764 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-707917"
	W1025 09:27:39.338882  168764 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:27:39.338906  168764 host.go:66] Checking if "kubernetes-upgrade-707917" exists ...
	I1025 09:27:39.339407  168764 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-707917 --format={{.State.Status}}
	I1025 09:27:39.361806  168764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/kubernetes-upgrade-707917/id_rsa Username:docker}
	I1025 09:27:39.387383  168764 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:27:39.387403  168764 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:27:39.387462  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:39.415855  168764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/kubernetes-upgrade-707917/id_rsa Username:docker}
	I1025 09:27:39.521534  168764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:27:39.536028  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:27:39.542101  168764 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:27:39.542171  168764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:27:39.574095  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:27:39.683676  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:39.683834  168764 retry.go:31] will retry after 315.482482ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:27:39.694343  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:39.694376  168764 retry.go:31] will retry after 328.324201ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:39.999687  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:27:40.023481  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:27:40.043004  168764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 09:27:40.097371  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:40.097405  168764 retry.go:31] will retry after 196.667511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:27:40.127483  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:40.127512  168764 retry.go:31] will retry after 192.743774ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:40.294622  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:27:40.321104  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:27:40.371641  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:40.371670  168764 retry.go:31] will retry after 842.110478ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:27:40.408990  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:40.409071  168764 retry.go:31] will retry after 836.571877ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:40.543244  168764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:27:41.043180  168764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:27:41.214944  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:27:41.247757  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:27:41.340738  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:41.340765  168764 retry.go:31] will retry after 651.31665ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:27:41.412920  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:41.412947  168764 retry.go:31] will retry after 657.241675ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:41.542267  168764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:27:41.993189  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:27:42.042743  168764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:27:42.070914  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:27:42.078299  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:42.078332  168764 retry.go:31] will retry after 1.439824709s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:27:42.185661  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:42.185691  168764 retry.go:31] will retry after 1.2616049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:42.543169  168764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
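
The repeated "apply failed, will retry" blocks above are the addon installer retrying kubectl apply while the apiserver still refuses connections on :8443; each attempt backs off a few hundred milliseconds with jitter while the interleaved pgrep probes wait for the kube-apiserver process. A generic sketch of that retry pattern follows; the attempt count and delays are illustrative, not retry.go's exact schedule:

// retryapply.go - sketch of the retry-until-apiserver-is-up pattern the
// "will retry after ..." lines show; backoff values are illustrative.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retry runs fn up to attempts times, sleeping base..2*base between tries.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	manifest := "/etc/kubernetes/addons/storage-provisioner.yaml" // from the log
	err := retry(10, 300*time.Millisecond, func() error {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply: %v: %s", err, out)
		}
		return nil
	})
	if err != nil {
		fmt.Println("giving up:", err)
	}
}
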
	
	
	==> CRI-O <==
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.571475058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.591663535Z" level=info msg="Starting container: e8b340e357e7745dcfdd28f2f2837779d619fbcc48c0299f33f01bc4c4338c4d" id=2b83c092-01fb-4d69-be46-9346dd617a5e name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.636297722Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.636676067Z" level=info msg="Started container" PID=2442 containerID=e8b340e357e7745dcfdd28f2f2837779d619fbcc48c0299f33f01bc4c4338c4d description=kube-system/coredns-66bc5c9577-jwhsz/coredns id=2b83c092-01fb-4d69-be46-9346dd617a5e name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c6fd4ba5f139a1b04ce511c15c027a73c5ba3028c019e7ade5fd60b7ce1a37a
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.638270106Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.662801971Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.663731235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.688348164Z" level=info msg="Created container 714664c95ab9237844e38f812333c61dd85473b8f3a9fe85af446cf295917418: kube-system/kube-controller-manager-pause-993166/kube-controller-manager" id=df4c1292-56f9-49ec-b58a-479146e4065e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.689415219Z" level=info msg="Created container 47fe39f906d8d7285850fca5853bc4537e57f459e57fd794878c97272cbeb938: kube-system/etcd-pause-993166/etcd" id=fb08dc13-424d-4bf1-b975-b52909b999b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.701098577Z" level=info msg="Starting container: 714664c95ab9237844e38f812333c61dd85473b8f3a9fe85af446cf295917418" id=2168a84e-f250-4fd0-9c60-98d2d4bad4f8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.706915845Z" level=info msg="Starting container: 47fe39f906d8d7285850fca5853bc4537e57f459e57fd794878c97272cbeb938" id=47508fdf-ff90-4881-ac47-bf3196c18105 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.742221071Z" level=info msg="Started container" PID=2456 containerID=714664c95ab9237844e38f812333c61dd85473b8f3a9fe85af446cf295917418 description=kube-system/kube-controller-manager-pause-993166/kube-controller-manager id=2168a84e-f250-4fd0-9c60-98d2d4bad4f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=17e9c9e4ea58af7945e7a41fb24e810eceb7da98f44acf722ec6f142324d155c
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.751317864Z" level=info msg="Started container" PID=2460 containerID=47fe39f906d8d7285850fca5853bc4537e57f459e57fd794878c97272cbeb938 description=kube-system/etcd-pause-993166/etcd id=47508fdf-ff90-4881-ac47-bf3196c18105 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1015f4ce15049980ccb0a081cb52ffbea7514beced9d0552da94ce0d0e3ea3a4
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.772189931Z" level=info msg="Created container 886db0afca69285c74060f184d80584ce967a64cc5c10a575ddd4e8bee524b4c: kube-system/kube-scheduler-pause-993166/kube-scheduler" id=fed59f04-ada5-4cfa-9134-36890c70b8d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.774546174Z" level=info msg="Starting container: 886db0afca69285c74060f184d80584ce967a64cc5c10a575ddd4e8bee524b4c" id=de4bf034-e9d8-43c8-9f60-51b50a5a0fac name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.776412972Z" level=info msg="Started container" PID=2472 containerID=886db0afca69285c74060f184d80584ce967a64cc5c10a575ddd4e8bee524b4c description=kube-system/kube-scheduler-pause-993166/kube-scheduler id=de4bf034-e9d8-43c8-9f60-51b50a5a0fac name=/runtime.v1.RuntimeService/StartContainer sandboxID=c16a2000d5672fb102bfe80a3520b2ce21374b8e6400f4fdb805e97a98f3dcfe
	Oct 25 09:27:21 pause-993166 crio[2170]: time="2025-10-25T09:27:21.106214409Z" level=info msg="Created container 2ca61daa9f640085335cff12d469a97aac33ed0ac86bb44265dd873b1b88ea7b: kube-system/kube-proxy-5rlkq/kube-proxy" id=9502902d-3836-46fd-9e20-48f819e015e0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:27:21 pause-993166 crio[2170]: time="2025-10-25T09:27:21.110244199Z" level=info msg="Starting container: 2ca61daa9f640085335cff12d469a97aac33ed0ac86bb44265dd873b1b88ea7b" id=070a4e77-d7e7-4a0f-b1c6-851d4e16b15a name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:27:21 pause-993166 crio[2170]: time="2025-10-25T09:27:21.118235249Z" level=info msg="Started container" PID=2477 containerID=2ca61daa9f640085335cff12d469a97aac33ed0ac86bb44265dd873b1b88ea7b description=kube-system/kube-proxy-5rlkq/kube-proxy id=070a4e77-d7e7-4a0f-b1c6-851d4e16b15a name=/runtime.v1.RuntimeService/StartContainer sandboxID=a86557571f7c803429c813e1c340799b2c30df8af0b9521663450da16d0b48c2
	Oct 25 09:27:25 pause-993166 crio[2170]: time="2025-10-25T09:27:25.2349325Z" level=info msg="Removing container: 87c5a3facf18119b6c304958a3ed79367860f43b651be0b4a9b1c500597d43cd" id=dc5a2620-211d-45b1-a614-f4edf4cc7a00 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:27:25 pause-993166 crio[2170]: time="2025-10-25T09:27:25.260179526Z" level=info msg="Removed container 87c5a3facf18119b6c304958a3ed79367860f43b651be0b4a9b1c500597d43cd: kube-system/kube-apiserver-pause-993166/kube-apiserver" id=dc5a2620-211d-45b1-a614-f4edf4cc7a00 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:27:25 pause-993166 crio[2170]: time="2025-10-25T09:27:25.271110246Z" level=info msg="Removing container: f563b86847be715f70af152a0fbab4b59318f54cbe306e06323d0f39adeec6dd" id=c720bbec-8e94-40e8-ac62-da78c07d2852 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:27:25 pause-993166 crio[2170]: time="2025-10-25T09:27:25.301708235Z" level=info msg="Removed container f563b86847be715f70af152a0fbab4b59318f54cbe306e06323d0f39adeec6dd: kube-system/etcd-pause-993166/etcd" id=c720bbec-8e94-40e8-ac62-da78c07d2852 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:27:25 pause-993166 crio[2170]: time="2025-10-25T09:27:25.320235585Z" level=info msg="Removing container: 169848acc3e0cecb70686ebdfdf92cf360efc3648f67252350f18a44c7a011cd" id=ccba1f3d-c6e8-44b2-8ae7-c91fe115e99a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:27:25 pause-993166 crio[2170]: time="2025-10-25T09:27:25.339375049Z" level=info msg="Removed container 169848acc3e0cecb70686ebdfdf92cf360efc3648f67252350f18a44c7a011cd: kube-system/kube-controller-manager-pause-993166/kube-controller-manager" id=ccba1f3d-c6e8-44b2-8ae7-c91fe115e99a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	2ca61daa9f640       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   23 seconds ago       Running             kube-proxy                1                   a86557571f7c8       kube-proxy-5rlkq                       kube-system
	886db0afca692       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   23 seconds ago       Running             kube-scheduler            2                   c16a2000d5672       kube-scheduler-pause-993166            kube-system
	47fe39f906d8d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   23 seconds ago       Running             etcd                      2                   1015f4ce15049       etcd-pause-993166                      kube-system
	714664c95ab92       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   23 seconds ago       Running             kube-controller-manager   2                   17e9c9e4ea58a       kube-controller-manager-pause-993166   kube-system
	e8b340e357e77       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   23 seconds ago       Running             coredns                   2                   4c6fd4ba5f139       coredns-66bc5c9577-jwhsz               kube-system
	8ffe0ddc38b5c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   23 seconds ago       Running             kube-apiserver            2                   9c54e466d03dc       kube-apiserver-pause-993166            kube-system
	76e14f6fbf01f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   About a minute ago   Created             coredns                   1                   4c6fd4ba5f139       coredns-66bc5c9577-jwhsz               kube-system
	2096c30fd3aa9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Running             kindnet-cni               1                   ca4d7c18a9522       kindnet-f8dj2                          kube-system
	8c0bc05f35e5c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            1                   9c54e466d03dc       kube-apiserver-pause-993166            kube-system
	f33ce8d9cda08       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Created             kube-scheduler            1                   c16a2000d5672       kube-scheduler-pause-993166            kube-system
	69577785cdf00       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   1                   17e9c9e4ea58a       kube-controller-manager-pause-993166   kube-system
	0bcb1e61aa495       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      1                   1015f4ce15049       etcd-pause-993166                      kube-system
	eef32253d3c56       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago        Exited              coredns                   0                   4c6fd4ba5f139       coredns-66bc5c9577-jwhsz               kube-system
	0e45c49e84a25       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 minutes ago        Exited              kindnet-cni               0                   ca4d7c18a9522       kindnet-f8dj2                          kube-system
	1f115f72025a0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 minutes ago        Exited              kube-proxy                0                   a86557571f7c8       kube-proxy-5rlkq                       kube-system
	ffc064adb3057       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   3 minutes ago        Exited              kube-scheduler            0                   c16a2000d5672       kube-scheduler-pause-993166            kube-system
	
	
	==> coredns [76e14f6fbf01f85f84b7dbe2758815257045635f805bae17697c1057870d2e45] <==
	
	
	==> coredns [e8b340e357e7745dcfdd28f2f2837779d619fbcc48c0299f33f01bc4c4338c4d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34034 - 31432 "HINFO IN 3128138462265537033.2385866799071473184. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024969278s
	
	
	==> coredns [eef32253d3c56e41d04f3cfb281703e63313570f9ae5713544b1d85e07c65fdb] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40878 - 41302 "HINFO IN 4088001627319950776.616337305452426126. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010883658s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-993166
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-993166
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=pause-993166
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_24_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:24:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-993166
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:27:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:27:38 +0000   Sat, 25 Oct 2025 09:24:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:27:38 +0000   Sat, 25 Oct 2025 09:24:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:27:38 +0000   Sat, 25 Oct 2025 09:24:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:27:38 +0000   Sat, 25 Oct 2025 09:27:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-993166
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                b714354e-bb8e-4060-a76c-5fcc136f8956
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-jwhsz                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m51s
	  kube-system                 etcd-pause-993166                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m56s
	  kube-system                 kindnet-f8dj2                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m51s
	  kube-system                 kube-apiserver-pause-993166             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 kube-controller-manager-pause-993166    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 kube-proxy-5rlkq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-scheduler-pause-993166             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 2m50s                kube-proxy       
	  Normal   Starting                 15s                  kube-proxy       
	  Warning  CgroupV1                 3m5s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m5s (x8 over 3m5s)  kubelet          Node pause-993166 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m5s (x8 over 3m5s)  kubelet          Node pause-993166 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m5s (x8 over 3m5s)  kubelet          Node pause-993166 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m57s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m57s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m56s                kubelet          Node pause-993166 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m56s                kubelet          Node pause-993166 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m56s                kubelet          Node pause-993166 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m52s                node-controller  Node pause-993166 event: Registered Node pause-993166 in Controller
	  Warning  ContainerGCFailed        57s (x2 over 117s)   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             23s (x8 over 95s)    kubelet          Node pause-993166 status is now: NodeNotReady
	  Normal   RegisteredNode           13s                  node-controller  Node pause-993166 event: Registered Node pause-993166 in Controller
	  Normal   NodeReady                6s (x2 over 2m10s)   kubelet          Node pause-993166 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct25 08:59] overlayfs: idmapped layers are currently not supported
	[Oct25 09:00] overlayfs: idmapped layers are currently not supported
	[  +5.088983] overlayfs: idmapped layers are currently not supported
	[ +51.199451] overlayfs: idmapped layers are currently not supported
	[Oct25 09:01] overlayfs: idmapped layers are currently not supported
	[Oct25 09:02] overlayfs: idmapped layers are currently not supported
	[Oct25 09:07] overlayfs: idmapped layers are currently not supported
	[Oct25 09:08] overlayfs: idmapped layers are currently not supported
	[Oct25 09:09] overlayfs: idmapped layers are currently not supported
	[Oct25 09:10] overlayfs: idmapped layers are currently not supported
	[Oct25 09:11] overlayfs: idmapped layers are currently not supported
	[Oct25 09:13] overlayfs: idmapped layers are currently not supported
	[ +18.632418] overlayfs: idmapped layers are currently not supported
	[ +13.271347] overlayfs: idmapped layers are currently not supported
	[Oct25 09:14] overlayfs: idmapped layers are currently not supported
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0bcb1e61aa4954f93376eff6871c63bd7fef85c6400a8063b3d8ccb280fc9dec] <==
	{"level":"warn","ts":"2025-10-25T09:25:46.400603Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"warn","ts":"2025-10-25T09:25:46.400697Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	{"level":"info","ts":"2025-10-25T09:25:46.400711Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.85.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.85.2:2380","--initial-cluster=pause-993166=https://192.168.85.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.85.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.85.2:2380","--name=pause-993166","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	{"level":"info","ts":"2025-10-25T09:25:46.400787Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2025-10-25T09:25:46.400802Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-10-25T09:25:46.400814Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-25T09:25:46.400835Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-25T09:25:46.401258Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"]}
	{"level":"info","ts":"2025-10-25T09:25:46.401365Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.4","git-sha":"5400cdc39","go-version":"go1.23.11","go-os":"linux","go-arch":"arm64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-993166","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-s
tate":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	{"level":"info","ts":"2025-10-25T09:25:46.402019Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0x40003340e8}"}
	
	
	==> etcd [47fe39f906d8d7285850fca5853bc4537e57f459e57fd794878c97272cbeb938] <==
	{"level":"warn","ts":"2025-10-25T09:27:25.586813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:25.684487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:25.721725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:25.787975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:25.821359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:25.894626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:25.941033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.002132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.037650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.076704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.146725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.161415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.197071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.236191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.266222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.330263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.440426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.440619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.511387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.544574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.594970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.632537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.657520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.678543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.922771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58602","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:27:44 up  1:10,  0 user,  load average: 2.81, 2.46, 2.29
	Linux pause-993166 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0e45c49e84a25010e320a542407973a38a5f8065f7093dd7b3d26e2e6c546c62] <==
	I1025 09:24:54.320974       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:24:54.414221       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 09:24:54.414359       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:24:54.414378       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:24:54.414390       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:24:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:24:54.524901       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:24:54.525021       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:24:54.525055       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:24:54.525379       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 09:25:24.525312       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:25:24.525315       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:25:24.525411       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:25:24.615041       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1025 09:25:26.125575       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:25:26.125608       1 metrics.go:72] Registering metrics
	I1025 09:25:26.125683       1 controller.go:711] "Syncing nftables rules"
	I1025 09:25:34.530163       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:25:34.530300       1 main.go:301] handling current node
	
	
	==> kindnet [2096c30fd3aa94582633e4db5513e39832fe004de98978844e87c7666828be6d] <==
	E1025 09:25:46.818181       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:25:47.579111       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:25:47.752456       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:25:47.790452       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:25:48.161381       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:25:50.201487       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:25:50.276609       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:25:50.690518       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:25:51.021367       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:25:55.576732       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:25:55.670679       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:25:56.540755       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:25:56.772125       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:26:03.174322       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:26:04.729518       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:26:05.115230       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:26:07.721792       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:26:20.153753       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:26:20.477431       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:26:24.374928       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:26:31.093664       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:26:47.370962       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:26:50.469821       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:27:10.596995       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:27:12.596880       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	
	
	==> kube-apiserver [8c0bc05f35e5cfd23f679343fe56282677202e373ae2cbd191ee4a80dd1cc492] <==
	
	
	==> kube-apiserver [8ffe0ddc38b5cd6b1ec6e998b477c88dfe3de5017eec479555b1a02a271662c6] <==
	I1025 09:27:28.324044       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:27:28.324061       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:27:28.324318       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 09:27:28.324363       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:27:28.324443       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:27:28.324486       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:27:28.345939       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 09:27:28.346117       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:27:28.347365       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:27:28.347391       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:27:28.347397       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:27:28.347404       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:27:28.376800       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:27:28.417084       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:27:28.422509       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:27:28.422751       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 09:27:28.475440       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 09:27:28.506323       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:27:28.925326       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:27:30.194881       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:27:31.833746       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:27:31.860597       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:27:31.908060       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:27:31.975946       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:27:32.060187       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [69577785cdf0018c99ee3138be8b8466664873956fae9df607ab4b9f0211856b] <==
	
	
	==> kube-controller-manager [714664c95ab9237844e38f812333c61dd85473b8f3a9fe85af446cf295917418] <==
	I1025 09:27:31.711484       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:27:31.711592       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:27:31.711606       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:27:31.711615       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:27:31.711625       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:27:31.711634       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:27:31.711645       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:27:31.711671       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:27:31.713427       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:27:31.713673       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:27:31.724141       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:27:31.724812       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:27:31.728639       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:27:31.732823       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:27:31.733598       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:27:31.742798       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:27:31.745925       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:27:31.750363       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:27:31.750477       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:27:31.750835       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:27:31.750895       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:27:31.750911       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:27:31.757084       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 09:27:31.762621       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:27:41.695185       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1f115f72025a0c69095b7e23981b889c5f6b849f9233c4dc87b8320007c8dc3a] <==
	I1025 09:24:54.294871       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:24:54.375955       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:24:54.477517       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:24:54.477633       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 09:24:54.477747       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:24:54.495902       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:24:54.495962       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:24:54.499894       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:24:54.500202       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:24:54.500225       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:24:54.501896       1 config.go:200] "Starting service config controller"
	I1025 09:24:54.501970       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:24:54.502171       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:24:54.502200       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:24:54.502236       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:24:54.502261       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:24:54.503220       1 config.go:309] "Starting node config controller"
	I1025 09:24:54.504535       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:24:54.504828       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:24:54.602286       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:24:54.602372       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:24:54.602615       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [2ca61daa9f640085335cff12d469a97aac33ed0ac86bb44265dd873b1b88ea7b] <==
	I1025 09:27:27.199426       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:27:27.331774       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:27:28.479033       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:27:28.479136       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 09:27:28.479241       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:27:28.847704       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:27:28.848988       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:27:28.889380       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:27:28.889850       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:27:28.890284       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:27:28.900173       1 config.go:200] "Starting service config controller"
	I1025 09:27:28.900212       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:27:28.900239       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:27:28.900248       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:27:28.900381       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:27:28.900394       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:27:28.913058       1 config.go:309] "Starting node config controller"
	I1025 09:27:28.913155       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:27:29.004897       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:27:29.005050       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:27:29.005379       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:27:29.013936       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [886db0afca69285c74060f184d80584ce967a64cc5c10a575ddd4e8bee524b4c] <==
	I1025 09:27:23.323419       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:27:28.258225       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:27:28.258333       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:27:28.258366       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:27:28.258415       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:27:28.434130       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:27:28.434230       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:27:28.442648       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:27:28.442947       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:27:28.442913       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:27:28.442892       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:27:28.552419       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f33ce8d9cda081620ca6bb2e65c2a49aa70fd1d8d5e3fe5766fdce8e06ebedba] <==
	
	
	==> kube-scheduler [ffc064adb3057dcbcb7e698a5374601d1a883faa933a2d0a24564611c6950319] <==
	I1025 09:24:43.660474       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:24:46.474601       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:24:46.474707       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:24:46.475237       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:24:46.475304       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:24:46.540666       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:24:46.540758       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:24:46.548571       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:24:46.548682       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:24:46.557739       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:24:46.548756       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 09:24:46.594433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1025 09:24:47.458580       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:25:39.481280       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1025 09:25:39.481458       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1025 09:25:39.481472       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1025 09:25:39.481545       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:25:39.481572       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1025 09:25:39.481619       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 25 09:27:20 pause-993166 kubelet[1302]: E1025 09:27:20.429673    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-993166\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2313188a20123f8b90114e44acf6d422" pod="kube-system/kube-controller-manager-pause-993166"
	Oct 25 09:27:20 pause-993166 kubelet[1302]: E1025 09:27:20.429964    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-993166\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2313188a20123f8b90114e44acf6d422" pod="kube-system/kube-controller-manager-pause-993166"
	Oct 25 09:27:20 pause-993166 kubelet[1302]: E1025 09:27:20.431096    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5rlkq\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="18988bb4-0b22-4e84-99ed-c40fd8525128" pod="kube-system/kube-proxy-5rlkq"
	Oct 25 09:27:20 pause-993166 kubelet[1302]: E1025 09:27:20.432418    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-f8dj2\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0a881c6a-52f1-4a77-a887-6a1f589c8605" pod="kube-system/kindnet-f8dj2"
	Oct 25 09:27:20 pause-993166 kubelet[1302]: E1025 09:27:20.432711    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-jwhsz\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4f925e04-50bc-46af-9c96-c0ec0fb36a26" pod="kube-system/coredns-66bc5c9577-jwhsz"
	Oct 25 09:27:20 pause-993166 kubelet[1302]: E1025 09:27:20.432931    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-993166\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="eea85baa70a68cd406c509e5652fe75c" pod="kube-system/etcd-pause-993166"
	Oct 25 09:27:20 pause-993166 kubelet[1302]: E1025 09:27:20.433176    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-993166\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b46b3bb615275f687eb03acd77879e8a" pod="kube-system/kube-scheduler-pause-993166"
	Oct 25 09:27:20 pause-993166 kubelet[1302]: E1025 09:27:20.433451    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-993166\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b9fc1ab0a251d338e1d857eab8b4410f" pod="kube-system/kube-apiserver-pause-993166"
	Oct 25 09:27:21 pause-993166 kubelet[1302]: I1025 09:27:21.210950    1302 setters.go:543] "Node became not ready" node="pause-993166" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-25T09:27:21Z","lastTransitionTime":"2025-10-25T09:27:21Z","reason":"KubeletNotReady","message":"container runtime is down"}
	Oct 25 09:27:25 pause-993166 kubelet[1302]: I1025 09:27:25.229439    1302 scope.go:117] "RemoveContainer" containerID="87c5a3facf18119b6c304958a3ed79367860f43b651be0b4a9b1c500597d43cd"
	Oct 25 09:27:25 pause-993166 kubelet[1302]: I1025 09:27:25.236899    1302 status_manager.go:507] "Container startup changed for unknown container" pod="kube-system/kube-apiserver-pause-993166" containerID="cri-o://8c0bc05f35e5cfd23f679343fe56282677202e373ae2cbd191ee4a80dd1cc492"
	Oct 25 09:27:25 pause-993166 kubelet[1302]: I1025 09:27:25.244278    1302 status_manager.go:507] "Container startup changed for unknown container" pod="kube-system/etcd-pause-993166" containerID="cri-o://0bcb1e61aa4954f93376eff6871c63bd7fef85c6400a8063b3d8ccb280fc9dec"
	Oct 25 09:27:25 pause-993166 kubelet[1302]: I1025 09:27:25.244550    1302 status_manager.go:444] "Container readiness changed for unknown container" pod="kube-system/etcd-pause-993166" containerID="cri-o://0bcb1e61aa4954f93376eff6871c63bd7fef85c6400a8063b3d8ccb280fc9dec"
	Oct 25 09:27:25 pause-993166 kubelet[1302]: I1025 09:27:25.244692    1302 status_manager.go:444] "Container readiness changed for unknown container" pod="kube-system/kube-apiserver-pause-993166" containerID="cri-o://8c0bc05f35e5cfd23f679343fe56282677202e373ae2cbd191ee4a80dd1cc492"
	Oct 25 09:27:25 pause-993166 kubelet[1302]: I1025 09:27:25.260994    1302 scope.go:117] "RemoveContainer" containerID="f563b86847be715f70af152a0fbab4b59318f54cbe306e06323d0f39adeec6dd"
	Oct 25 09:27:25 pause-993166 kubelet[1302]: I1025 09:27:25.306231    1302 scope.go:117] "RemoveContainer" containerID="169848acc3e0cecb70686ebdfdf92cf360efc3648f67252350f18a44c7a011cd"
	Oct 25 09:27:28 pause-993166 kubelet[1302]: E1025 09:27:28.053376    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-jwhsz\" is forbidden: User \"system:node:pause-993166\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-993166' and this object" podUID="4f925e04-50bc-46af-9c96-c0ec0fb36a26" pod="kube-system/coredns-66bc5c9577-jwhsz"
	Oct 25 09:27:28 pause-993166 kubelet[1302]: E1025 09:27:28.059345    1302 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-993166\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-993166' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 25 09:27:28 pause-993166 kubelet[1302]: E1025 09:27:28.140728    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-993166\" is forbidden: User \"system:node:pause-993166\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-993166' and this object" podUID="eea85baa70a68cd406c509e5652fe75c" pod="kube-system/etcd-pause-993166"
	Oct 25 09:27:28 pause-993166 kubelet[1302]: W1025 09:27:28.195498    1302 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 25 09:27:28 pause-993166 kubelet[1302]: E1025 09:27:28.250898    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-993166\" is forbidden: User \"system:node:pause-993166\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-993166' and this object" podUID="b46b3bb615275f687eb03acd77879e8a" pod="kube-system/kube-scheduler-pause-993166"
	Oct 25 09:27:38 pause-993166 kubelet[1302]: W1025 09:27:38.229975    1302 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 25 09:27:41 pause-993166 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:27:41 pause-993166 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:27:41 pause-993166 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
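The failure signature for TestPause/serial/Pause is visible in the dump above: the node Events show ContainerGCFailed with "dial unix /var/run/crio/crio.sock: connect: no such file or directory", meaning kubelet lost its CRI-O socket while pause/unpause was cycling the runtime, and the kubelet section ends with systemd stopping kubelet.service at 09:27:41. A minimal sketch for confirming runtime state on the node after a failed pause (assuming the pause-993166 profile is still up, and that crio and kubelet are the systemd unit names used in the kicbase image):

	$ minikube ssh -p pause-993166 -- sudo systemctl is-active crio kubelet
	$ minikube ssh -p pause-993166 -- ls -l /var/run/crio/crio.sock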
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-993166 -n pause-993166
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-993166 -n pause-993166: exit status 2 (395.838618ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
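minikube status encodes component health as bit flags in its exit code (0 means everything is running), so a non-zero exit here is not itself a test bug; exit status 2 typically means the host is up but a cluster component is stopped, which is consistent with a paused profile and matches the framework's "(may be ok)". A quick sketch for reading all three fields in one call, using the same .Host/.Kubelet/.APIServer template fields the harness queries above:

	$ out/minikube-linux-arm64 status -p pause-993166 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'; echo "exit=$?"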
helpers_test.go:269: (dbg) Run:  kubectl --context pause-993166 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-993166
helpers_test.go:243: (dbg) docker inspect pause-993166:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7bccdbee8081e7e37c336e2d18e06b3364513944a6f5188fe19f1a3e39b4ceba",
	        "Created": "2025-10-25T09:24:19.396299953Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 160092,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:24:19.475634382Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/7bccdbee8081e7e37c336e2d18e06b3364513944a6f5188fe19f1a3e39b4ceba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7bccdbee8081e7e37c336e2d18e06b3364513944a6f5188fe19f1a3e39b4ceba/hostname",
	        "HostsPath": "/var/lib/docker/containers/7bccdbee8081e7e37c336e2d18e06b3364513944a6f5188fe19f1a3e39b4ceba/hosts",
	        "LogPath": "/var/lib/docker/containers/7bccdbee8081e7e37c336e2d18e06b3364513944a6f5188fe19f1a3e39b4ceba/7bccdbee8081e7e37c336e2d18e06b3364513944a6f5188fe19f1a3e39b4ceba-json.log",
	        "Name": "/pause-993166",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-993166:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-993166",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7bccdbee8081e7e37c336e2d18e06b3364513944a6f5188fe19f1a3e39b4ceba",
	                "LowerDir": "/var/lib/docker/overlay2/8eccda200bd725632d5da0950b13432c512c56fd35b095d200ec60674c53a01f-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8eccda200bd725632d5da0950b13432c512c56fd35b095d200ec60674c53a01f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8eccda200bd725632d5da0950b13432c512c56fd35b095d200ec60674c53a01f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8eccda200bd725632d5da0950b13432c512c56fd35b095d200ec60674c53a01f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-993166",
	                "Source": "/var/lib/docker/volumes/pause-993166/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-993166",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-993166",
	                "name.minikube.sigs.k8s.io": "pause-993166",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "804c3cbdbf052b79f715b608308eb0293071fb77e39619d53ea16bce1a97767e",
	            "SandboxKey": "/var/run/docker/netns/804c3cbdbf05",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33025"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-993166": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:a2:2a:b6:4a:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "55cf5daa39d0de1da2c11e539b397ddf464a4874422d2addedf0dde2d66322ac",
	                    "EndpointID": "dbf238f9be11e8d1aa73fad1e689c8d56e14a1ca7d0b0108027d7ac16643fdae",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-993166",
	                        "7bccdbee8081"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
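The inspect output above shows each guest port (22, 2376, 5000, 8443, 32443) published on its own 127.0.0.1 host port. As a minimal sketch (not minikube's actual helper; hostPortFor is a hypothetical name), the same inspect template that cli_runner uses later in this log can resolve one of those mappings from Go:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPortFor runs `docker container inspect` with a Go template to
	// resolve the host port a container port is published on, e.g.
	// "22/tcp" -> "33023" for the pause-993166 container inspected above.
	func hostPortFor(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		p, err := hostPortFor("pause-993166", "22/tcp")
		if err != nil {
			panic(err)
		}
		fmt.Println(p) // expected: 33023, per the Ports block above
	}
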
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-993166 -n pause-993166
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-993166 -n pause-993166: exit status 2 (347.838571ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
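Note that the host reports Running while the command still exits 2; the helper tolerates that here, since the cluster has just been paused and is expected to be degraded. A minimal Go sketch of treating a CLI's exit code as data rather than a hard failure (runStatus is a hypothetical helper, not the test framework's code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// runStatus invokes a command and returns its combined output and exit
	// code; a non-zero exit is reported as data instead of aborting, the way
	// the post-mortem helper above handles `minikube status`.
	func runStatus(bin string, args ...string) (string, int, error) {
		out, err := exec.Command(bin, args...).CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return string(out), ee.ExitCode(), nil
		}
		if err != nil {
			return "", -1, err // the binary could not be run at all
		}
		return string(out), 0, nil
	}

	func main() {
		out, code, err := runStatus("out/minikube-linux-arm64",
			"status", "--format={{.Host}}", "-p", "pause-993166")
		if err != nil {
			panic(err)
		}
		fmt.Printf("status=%q exit=%d (may be ok)\n", out, code)
	}
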
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-993166 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-993166 logs -n 25: (1.533863244s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-693294 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-693294       │ jenkins │ v1.37.0 │ 25 Oct 25 09:20 UTC │ 25 Oct 25 09:21 UTC │
	│ start   │ -p missing-upgrade-334875 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-334875    │ jenkins │ v1.37.0 │ 25 Oct 25 09:21 UTC │ 25 Oct 25 09:22 UTC │
	│ delete  │ -p NoKubernetes-693294                                                                                                                   │ NoKubernetes-693294       │ jenkins │ v1.37.0 │ 25 Oct 25 09:21 UTC │ 25 Oct 25 09:21 UTC │
	│ start   │ -p NoKubernetes-693294 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-693294       │ jenkins │ v1.37.0 │ 25 Oct 25 09:21 UTC │ 25 Oct 25 09:21 UTC │
	│ ssh     │ -p NoKubernetes-693294 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-693294       │ jenkins │ v1.37.0 │ 25 Oct 25 09:21 UTC │                     │
	│ stop    │ -p NoKubernetes-693294                                                                                                                   │ NoKubernetes-693294       │ jenkins │ v1.37.0 │ 25 Oct 25 09:21 UTC │ 25 Oct 25 09:21 UTC │
	│ start   │ -p NoKubernetes-693294 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-693294       │ jenkins │ v1.37.0 │ 25 Oct 25 09:21 UTC │ 25 Oct 25 09:22 UTC │
	│ ssh     │ -p NoKubernetes-693294 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-693294       │ jenkins │ v1.37.0 │ 25 Oct 25 09:22 UTC │                     │
	│ delete  │ -p NoKubernetes-693294                                                                                                                   │ NoKubernetes-693294       │ jenkins │ v1.37.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:22 UTC │
	│ start   │ -p kubernetes-upgrade-707917 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-707917 │ jenkins │ v1.37.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:22 UTC │
	│ delete  │ -p missing-upgrade-334875                                                                                                                │ missing-upgrade-334875    │ jenkins │ v1.37.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:22 UTC │
	│ start   │ -p stopped-upgrade-971794 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-971794    │ jenkins │ v1.32.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:22 UTC │
	│ stop    │ stopped-upgrade-971794 stop                                                                                                              │ stopped-upgrade-971794    │ jenkins │ v1.32.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:22 UTC │
	│ stop    │ -p kubernetes-upgrade-707917                                                                                                             │ kubernetes-upgrade-707917 │ jenkins │ v1.37.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:22 UTC │
	│ start   │ -p stopped-upgrade-971794 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-971794    │ jenkins │ v1.37.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:23 UTC │
	│ start   │ -p kubernetes-upgrade-707917 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-707917 │ jenkins │ v1.37.0 │ 25 Oct 25 09:22 UTC │ 25 Oct 25 09:27 UTC │
	│ delete  │ -p stopped-upgrade-971794                                                                                                                │ stopped-upgrade-971794    │ jenkins │ v1.37.0 │ 25 Oct 25 09:23 UTC │ 25 Oct 25 09:23 UTC │
	│ start   │ -p running-upgrade-826823 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-826823    │ jenkins │ v1.32.0 │ 25 Oct 25 09:23 UTC │ 25 Oct 25 09:23 UTC │
	│ start   │ -p running-upgrade-826823 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-826823    │ jenkins │ v1.37.0 │ 25 Oct 25 09:23 UTC │ 25 Oct 25 09:24 UTC │
	│ delete  │ -p running-upgrade-826823                                                                                                                │ running-upgrade-826823    │ jenkins │ v1.37.0 │ 25 Oct 25 09:24 UTC │ 25 Oct 25 09:24 UTC │
	│ start   │ -p pause-993166 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-993166              │ jenkins │ v1.37.0 │ 25 Oct 25 09:24 UTC │ 25 Oct 25 09:25 UTC │
	│ start   │ -p pause-993166 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-993166              │ jenkins │ v1.37.0 │ 25 Oct 25 09:25 UTC │ 25 Oct 25 09:27 UTC │
	│ start   │ -p kubernetes-upgrade-707917 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-707917 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │                     │
	│ start   │ -p kubernetes-upgrade-707917 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-707917 │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │                     │
	│ pause   │ -p pause-993166 --alsologtostderr -v=5                                                                                                   │ pause-993166              │ jenkins │ v1.37.0 │ 25 Oct 25 09:27 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:27:32
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:27:32.891590  168764 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:27:32.891757  168764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:27:32.891785  168764 out.go:374] Setting ErrFile to fd 2...
	I1025 09:27:32.891806  168764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:27:32.892065  168764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:27:32.892455  168764 out.go:368] Setting JSON to false
	I1025 09:27:32.894107  168764 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4204,"bootTime":1761380249,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:27:32.894204  168764 start.go:141] virtualization:  
	I1025 09:27:32.898169  168764 out.go:179] * [kubernetes-upgrade-707917] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:27:32.902464  168764 notify.go:220] Checking for updates...
	I1025 09:27:32.903408  168764 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:27:32.906968  168764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:27:32.909815  168764 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:27:32.912571  168764 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:27:32.915424  168764 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:27:32.918262  168764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:27:32.921496  168764 config.go:182] Loaded profile config "kubernetes-upgrade-707917": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:27:32.922170  168764 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:27:32.959604  168764 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:27:32.959718  168764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:27:33.030928  168764 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 09:27:33.021135037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:27:33.031072  168764 docker.go:318] overlay module found
	I1025 09:27:33.034413  168764 out.go:179] * Using the docker driver based on existing profile
	I1025 09:27:33.037271  168764 start.go:305] selected driver: docker
	I1025 09:27:33.037292  168764 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-707917 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-707917 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:27:33.037398  168764 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:27:33.038197  168764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:27:33.096412  168764 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 09:27:33.086555858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:27:33.096742  168764 cni.go:84] Creating CNI manager for ""
	I1025 09:27:33.096805  168764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:27:33.096848  168764 start.go:349] cluster config:
	{Name:kubernetes-upgrade-707917 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-707917 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:27:33.100376  168764 out.go:179] * Starting "kubernetes-upgrade-707917" primary control-plane node in "kubernetes-upgrade-707917" cluster
	I1025 09:27:33.103514  168764 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:27:33.106582  168764 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:27:28.252043  164447 node_ready.go:49] node "pause-993166" is "Ready"
	I1025 09:27:28.252069  164447 node_ready.go:38] duration metric: took 9.880954173s for node "pause-993166" to be "Ready" ...
	I1025 09:27:28.252083  164447 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:27:28.252139  164447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:27:28.288361  164447 api_server.go:72] duration metric: took 10.120121128s to wait for apiserver process to appear ...
	I1025 09:27:28.288383  164447 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:27:28.288403  164447 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:27:28.446874  164447 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:27:28.446959  164447 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:27:28.789232  164447 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:27:28.821295  164447 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:27:28.821339  164447 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:27:29.288896  164447 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:27:29.311705  164447 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:27:29.311742  164447 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:27:29.789364  164447 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:27:29.797538  164447 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 09:27:29.798602  164447 api_server.go:141] control plane version: v1.34.1
	I1025 09:27:29.798627  164447 api_server.go:131] duration metric: took 1.510236323s to wait for apiserver health ...
	I1025 09:27:29.798636  164447 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:27:29.801883  164447 system_pods.go:59] 7 kube-system pods found
	I1025 09:27:29.801914  164447 system_pods.go:61] "coredns-66bc5c9577-jwhsz" [4f925e04-50bc-46af-9c96-c0ec0fb36a26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:27:29.801923  164447 system_pods.go:61] "etcd-pause-993166" [60b9d7a1-fe2b-48f9-8a1c-fdb00fb49a7c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:27:29.801931  164447 system_pods.go:61] "kindnet-f8dj2" [0a881c6a-52f1-4a77-a887-6a1f589c8605] Running
	I1025 09:27:29.801940  164447 system_pods.go:61] "kube-apiserver-pause-993166" [a6ee0d1f-983c-4e73-a86f-b2d267ffd56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:27:29.801952  164447 system_pods.go:61] "kube-controller-manager-pause-993166" [d6e20edc-31fe-453a-8bf0-d72e17fd0bda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:27:29.801957  164447 system_pods.go:61] "kube-proxy-5rlkq" [18988bb4-0b22-4e84-99ed-c40fd8525128] Running
	I1025 09:27:29.801967  164447 system_pods.go:61] "kube-scheduler-pause-993166" [3afb80c3-fb10-43a9-b671-8c849cbd9786] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:27:29.802008  164447 system_pods.go:74] duration metric: took 3.365951ms to wait for pod list to return data ...
	I1025 09:27:29.802017  164447 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:27:29.804486  164447 default_sa.go:45] found service account: "default"
	I1025 09:27:29.804508  164447 default_sa.go:55] duration metric: took 2.483063ms for default service account to be created ...
	I1025 09:27:29.804518  164447 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:27:29.807179  164447 system_pods.go:86] 7 kube-system pods found
	I1025 09:27:29.807208  164447 system_pods.go:89] "coredns-66bc5c9577-jwhsz" [4f925e04-50bc-46af-9c96-c0ec0fb36a26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:27:29.807218  164447 system_pods.go:89] "etcd-pause-993166" [60b9d7a1-fe2b-48f9-8a1c-fdb00fb49a7c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:27:29.807224  164447 system_pods.go:89] "kindnet-f8dj2" [0a881c6a-52f1-4a77-a887-6a1f589c8605] Running
	I1025 09:27:29.807230  164447 system_pods.go:89] "kube-apiserver-pause-993166" [a6ee0d1f-983c-4e73-a86f-b2d267ffd56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:27:29.807243  164447 system_pods.go:89] "kube-controller-manager-pause-993166" [d6e20edc-31fe-453a-8bf0-d72e17fd0bda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:27:29.807250  164447 system_pods.go:89] "kube-proxy-5rlkq" [18988bb4-0b22-4e84-99ed-c40fd8525128] Running
	I1025 09:27:29.807256  164447 system_pods.go:89] "kube-scheduler-pause-993166" [3afb80c3-fb10-43a9-b671-8c849cbd9786] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:27:29.807266  164447 system_pods.go:126] duration metric: took 2.743146ms to wait for k8s-apps to be running ...
	I1025 09:27:29.807278  164447 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:27:29.807334  164447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:27:29.824001  164447 system_svc.go:56] duration metric: took 16.714542ms WaitForService to wait for kubelet
	I1025 09:27:29.824033  164447 kubeadm.go:586] duration metric: took 11.655797817s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:27:29.824054  164447 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:27:29.826792  164447 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:27:29.826832  164447 node_conditions.go:123] node cpu capacity is 2
	I1025 09:27:29.826844  164447 node_conditions.go:105] duration metric: took 2.784336ms to run NodePressure ...
	I1025 09:27:29.826857  164447 start.go:241] waiting for startup goroutines ...
	I1025 09:27:29.826866  164447 start.go:246] waiting for cluster config update ...
	I1025 09:27:29.826875  164447 start.go:255] writing updated cluster config ...
	I1025 09:27:29.827166  164447 ssh_runner.go:195] Run: rm -f paused
	I1025 09:27:29.830492  164447 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:27:29.831077  164447 kapi.go:59] client config for pause-993166: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/profiles/pause-993166/client.crt", KeyFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/profiles/pause-993166/client.key", CAFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:27:29.835594  164447 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jwhsz" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:27:31.849341  164447 pod_ready.go:104] pod "coredns-66bc5c9577-jwhsz" is not "Ready", error: node "pause-993166" hosting pod "coredns-66bc5c9577-jwhsz" is not "Ready" (will retry)
	I1025 09:27:33.109569  168764 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:27:33.109716  168764 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:27:33.109756  168764 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:27:33.109769  168764 cache.go:58] Caching tarball of preloaded images
	I1025 09:27:33.109840  168764 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:27:33.109854  168764 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:27:33.109956  168764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/config.json ...
	I1025 09:27:33.131327  168764 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:27:33.131350  168764 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:27:33.131368  168764 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:27:33.131390  168764 start.go:360] acquireMachinesLock for kubernetes-upgrade-707917: {Name:mkab74429f78f38bcfb1582561347a903c5ed810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:27:33.131458  168764 start.go:364] duration metric: took 43.98µs to acquireMachinesLock for "kubernetes-upgrade-707917"
	I1025 09:27:33.131481  168764 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:27:33.131489  168764 fix.go:54] fixHost starting: 
	I1025 09:27:33.131758  168764 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-707917 --format={{.State.Status}}
	I1025 09:27:33.151523  168764 fix.go:112] recreateIfNeeded on kubernetes-upgrade-707917: state=Running err=<nil>
	W1025 09:27:33.151573  168764 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 09:27:33.156795  168764 out.go:252] * Updating the running docker "kubernetes-upgrade-707917" container ...
	I1025 09:27:33.156832  168764 machine.go:93] provisionDockerMachine start ...
	I1025 09:27:33.156925  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:33.174508  168764 main.go:141] libmachine: Using SSH client type: native
	I1025 09:27:33.174835  168764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1025 09:27:33.174850  168764 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:27:33.325544  168764 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-707917
	
	I1025 09:27:33.325574  168764 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-707917"
	I1025 09:27:33.325640  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:33.346341  168764 main.go:141] libmachine: Using SSH client type: native
	I1025 09:27:33.346664  168764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1025 09:27:33.346683  168764 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-707917 && echo "kubernetes-upgrade-707917" | sudo tee /etc/hostname
	I1025 09:27:33.529099  168764 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-707917
	
	I1025 09:27:33.529271  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:33.548216  168764 main.go:141] libmachine: Using SSH client type: native
	I1025 09:27:33.548535  168764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1025 09:27:33.548555  168764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-707917' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-707917/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-707917' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:27:33.706378  168764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:27:33.706400  168764 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:27:33.706426  168764 ubuntu.go:190] setting up certificates
	I1025 09:27:33.706436  168764 provision.go:84] configureAuth start
	I1025 09:27:33.706516  168764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-707917
	I1025 09:27:33.725361  168764 provision.go:143] copyHostCerts
	I1025 09:27:33.725435  168764 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:27:33.725453  168764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:27:33.725536  168764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:27:33.725651  168764 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:27:33.725662  168764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:27:33.725690  168764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:27:33.725757  168764 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:27:33.725765  168764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:27:33.725791  168764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 09:27:33.725851  168764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-707917 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-707917 localhost minikube]
	I1025 09:27:34.801296  168764 provision.go:177] copyRemoteCerts
	I1025 09:27:34.801362  168764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:27:34.801404  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:34.819152  168764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/kubernetes-upgrade-707917/id_rsa Username:docker}
	I1025 09:27:34.930880  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1025 09:27:34.967702  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:27:34.997067  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:27:35.031425  168764 provision.go:87] duration metric: took 1.324965911s to configureAuth
	I1025 09:27:35.031452  168764 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:27:35.031638  168764 config.go:182] Loaded profile config "kubernetes-upgrade-707917": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:27:35.031753  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:35.051684  168764 main.go:141] libmachine: Using SSH client type: native
	I1025 09:27:35.052004  168764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1025 09:27:35.052019  168764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:27:35.738732  168764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:27:35.738814  168764 machine.go:96] duration metric: took 2.581973416s to provisionDockerMachine
	I1025 09:27:35.738840  168764 start.go:293] postStartSetup for "kubernetes-upgrade-707917" (driver="docker")
	I1025 09:27:35.738882  168764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:27:35.738972  168764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:27:35.739050  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:35.757072  168764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/kubernetes-upgrade-707917/id_rsa Username:docker}
	I1025 09:27:35.880985  168764 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:27:35.888831  168764 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:27:35.888857  168764 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:27:35.888869  168764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:27:35.888931  168764 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:27:35.889016  168764 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:27:35.889124  168764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:27:35.908103  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:27:35.933642  168764 start.go:296] duration metric: took 194.757044ms for postStartSetup
	I1025 09:27:35.933722  168764 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:27:35.933795  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:35.953815  168764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/kubernetes-upgrade-707917/id_rsa Username:docker}
	I1025 09:27:36.072059  168764 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:27:36.077281  168764 fix.go:56] duration metric: took 2.945784566s for fixHost
	I1025 09:27:36.077307  168764 start.go:83] releasing machines lock for "kubernetes-upgrade-707917", held for 2.945837867s
	I1025 09:27:36.077377  168764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-707917
	I1025 09:27:36.095960  168764 ssh_runner.go:195] Run: cat /version.json
	I1025 09:27:36.096023  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:36.096272  168764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:27:36.096325  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:36.125902  168764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/kubernetes-upgrade-707917/id_rsa Username:docker}
	I1025 09:27:36.128657  168764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/kubernetes-upgrade-707917/id_rsa Username:docker}
	I1025 09:27:36.230153  168764 ssh_runner.go:195] Run: systemctl --version
	I1025 09:27:36.324455  168764 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:27:36.380993  168764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:27:36.386053  168764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:27:36.386129  168764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:27:36.394590  168764 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:27:36.394614  168764 start.go:495] detecting cgroup driver to use...
	I1025 09:27:36.394675  168764 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:27:36.394736  168764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:27:36.411215  168764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:27:36.424808  168764 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:27:36.424871  168764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:27:36.441217  168764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:27:36.454509  168764 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:27:36.592165  168764 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:27:36.809240  168764 docker.go:234] disabling docker service ...
	I1025 09:27:36.809354  168764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:27:36.827909  168764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:27:36.861034  168764 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:27:37.099844  168764 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:27:37.365477  168764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
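
The disable sequence above is the standard systemd pattern for keeping a runtime down: stop the socket first (so socket activation cannot respawn the service), stop the service, disable the socket, then mask the service. A condensed sketch of the same pattern:

    # stop, disable, and mask docker so nothing re-activates it
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is down"
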
	I1025 09:27:37.380847  168764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:27:37.415537  168764 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:27:37.415659  168764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:27:37.428454  168764 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:27:37.428606  168764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:27:37.441827  168764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:27:37.453214  168764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:27:37.471430  168764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:27:37.483258  168764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:27:37.500465  168764 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:27:37.508958  168764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:27:37.518116  168764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:27:37.532416  168764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:27:37.542055  168764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:27:37.767533  168764 ssh_runner.go:195] Run: sudo systemctl restart crio
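
Everything from 09:27:37.380 to the restart is plain file editing: point crictl at the CRI-O socket, then rewrite a few keys in the 02-crio.conf drop-in. Replayed as a standalone script (the same commands as the log, minus minikube's wrappers):

    # point crictl at crio, set pause image and cgroup driver, then restart
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
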
	W1025 09:27:34.354072  164447 pod_ready.go:104] pod "coredns-66bc5c9577-jwhsz" is not "Ready", error: node "pause-993166" hosting pod "coredns-66bc5c9577-jwhsz" is not "Ready" (will retry)
	W1025 09:27:36.842581  164447 pod_ready.go:104] pod "coredns-66bc5c9577-jwhsz" is not "Ready", error: node "pause-993166" hosting pod "coredns-66bc5c9577-jwhsz" is not "Ready" (will retry)
	I1025 09:27:38.030416  168764 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:27:38.030530  168764 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:27:38.034746  168764 start.go:563] Will wait 60s for crictl version
	I1025 09:27:38.034867  168764 ssh_runner.go:195] Run: which crictl
	I1025 09:27:38.039303  168764 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:27:38.072031  168764 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:27:38.072116  168764 ssh_runner.go:195] Run: crio --version
	I1025 09:27:38.104902  168764 ssh_runner.go:195] Run: crio --version
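
The two "Will wait 60s" steps above amount to polling for the socket file and then for a crictl that answers; an equivalent loop:

    # wait up to 60s for the crio socket, then ask the runtime for its version
    for _ in $(seq 1 60); do
      [ -S /var/run/crio/crio.sock ] && break
      sleep 1
    done
    sudo crictl version
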
	I1025 09:27:38.138366  168764 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:27:38.141396  168764 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-707917 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:27:38.158375  168764 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 09:27:38.162448  168764 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-707917 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-707917 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:27:38.162569  168764 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:27:38.162623  168764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:27:38.197058  168764 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:27:38.197081  168764 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:27:38.197138  168764 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:27:38.224479  168764 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:27:38.224504  168764 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:27:38.224511  168764 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 09:27:38.224624  168764 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-707917 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-707917 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:27:38.224708  168764 ssh_runner.go:195] Run: crio config
	I1025 09:27:38.290926  168764 cni.go:84] Creating CNI manager for ""
	I1025 09:27:38.290950  168764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:27:38.290967  168764 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:27:38.290994  168764 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-707917 NodeName:kubernetes-upgrade-707917 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:27:38.291125  168764 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-707917"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:27:38.291201  168764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:27:38.298840  168764 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:27:38.298928  168764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:27:38.306236  168764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1025 09:27:38.319058  168764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:27:38.332369  168764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
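
The kubeadm.yaml.new just written is what restartPrimaryControlPlane later diffs against the deployed copy (see 09:27:39.288855 below) to decide whether the control plane needs reconfiguring; the check is literally:

    # exit status 0 (no diff) means the running cluster config is unchanged
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "no reconfiguration required"
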
	I1025 09:27:38.347649  168764 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:27:38.351285  168764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:27:38.487365  168764 ssh_runner.go:195] Run: sudo systemctl start kubelet
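
The scp-from-memory steps above materialize the kubelet unit shown at kubeadm.go:946. Done by hand it would look roughly like this (trimmed to the ExecStart override; the real 10-kubeadm.conf is 375 bytes):

    # install the kubelet drop-in and (re)start kubelet
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-707917 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
    EOF
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
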
	I1025 09:27:38.500831  168764 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917 for IP: 192.168.76.2
	I1025 09:27:38.500857  168764 certs.go:195] generating shared ca certs ...
	I1025 09:27:38.500873  168764 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:27:38.501050  168764 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:27:38.501117  168764 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:27:38.501132  168764 certs.go:257] generating profile certs ...
	I1025 09:27:38.501238  168764 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/client.key
	I1025 09:27:38.501317  168764 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/apiserver.key.b64a597d
	I1025 09:27:38.501385  168764 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/proxy-client.key
	I1025 09:27:38.501530  168764 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:27:38.501577  168764 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:27:38.501593  168764 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:27:38.501623  168764 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:27:38.501684  168764 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:27:38.501719  168764 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:27:38.501794  168764 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:27:38.502463  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:27:38.524933  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:27:38.543701  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:27:38.561569  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:27:38.584306  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1025 09:27:38.604137  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:27:38.622933  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:27:38.641931  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:27:38.665568  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:27:38.688722  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:27:38.710736  168764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:27:38.732601  168764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:27:38.745883  168764 ssh_runner.go:195] Run: openssl version
	I1025 09:27:38.752566  168764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:27:38.761464  168764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:27:38.765442  168764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:27:38.765582  168764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:27:38.815569  168764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:27:38.825097  168764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:27:38.834281  168764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:27:38.844801  168764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:27:38.844906  168764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:27:38.895759  168764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:27:38.903631  168764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:27:38.912299  168764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:27:38.916211  168764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:27:38.916304  168764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:27:38.965393  168764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
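
The hash-and-symlink dance above is how OpenSSL locates trust anchors: each CA in /etc/ssl/certs must be reachable under a link named <subject-hash>.0. The hashes in the log (3ec20f2e, b5213941, 51391683) come straight from openssl:

    # link a CA under its subject hash so openssl's CApath lookup finds it
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints b5213941 for this CA
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
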
	I1025 09:27:38.973658  168764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:27:38.977624  168764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:27:39.019776  168764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:27:39.061821  168764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:27:39.103497  168764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:27:39.144584  168764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:27:39.191110  168764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
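
Each of the six checks above uses openssl's -checkend, which exits non-zero if the certificate expires within the given number of seconds (86400 = 24h). The same sweep as a loop:

    # flag any control-plane cert that expires within 24 hours
    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        || echo "expiring soon: ${c}"
    done
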
	I1025 09:27:39.232176  168764 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-707917 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-707917 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:27:39.232266  168764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:27:39.232335  168764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:27:39.260233  168764 cri.go:89] found id: "bf4e471a5b488ca7cf28a3bc04eddff89e1632d40f77c4d271f14472ef7129c8"
	I1025 09:27:39.260301  168764 cri.go:89] found id: "155b666bd5611f0afd2e8ea02266eb6228ed2dd5ce8d29a9de9103d35269a59d"
	I1025 09:27:39.260313  168764 cri.go:89] found id: "e14be6f20c6a4d1308920a126fe347bd13f8e503cc80839d9dd18e6dcc0f1dfa"
	I1025 09:27:39.260318  168764 cri.go:89] found id: "ecd9b68ee75ca47b8f0c237863e410b4bebd1838dd0683b343a482379667adcf"
	I1025 09:27:39.260321  168764 cri.go:89] found id: ""
	I1025 09:27:39.260391  168764 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:27:39.271411  168764 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:27:39Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:27:39.271543  168764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:27:39.279329  168764 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:27:39.279355  168764 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:27:39.279406  168764 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:27:39.286441  168764 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:27:39.287134  168764 kubeconfig.go:125] found "kubernetes-upgrade-707917" server: "https://192.168.76.2:8443"
	I1025 09:27:39.288015  168764 kapi.go:59] client config for kubernetes-upgrade-707917: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/client.crt", KeyFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/client.key", CAFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:27:39.288516  168764 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 09:27:39.288534  168764 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 09:27:39.288539  168764 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 09:27:39.288544  168764 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 09:27:39.288549  168764 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 09:27:39.288855  168764 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:27:39.296226  168764 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 09:27:39.296299  168764 kubeadm.go:601] duration metric: took 16.937074ms to restartPrimaryControlPlane
	I1025 09:27:39.296314  168764 kubeadm.go:402] duration metric: took 64.147365ms to StartCluster
	I1025 09:27:39.296330  168764 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:27:39.296392  168764 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:27:39.297306  168764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:27:39.297534  168764 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:27:39.297831  168764 config.go:182] Loaded profile config "kubernetes-upgrade-707917": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:27:39.297881  168764 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:27:39.297945  168764 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-707917"
	I1025 09:27:39.297958  168764 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-707917"
	W1025 09:27:39.297968  168764 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:27:39.298248  168764 host.go:66] Checking if "kubernetes-upgrade-707917" exists ...
	I1025 09:27:39.298402  168764 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-707917"
	I1025 09:27:39.298425  168764 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-707917"
	I1025 09:27:39.298719  168764 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-707917 --format={{.State.Status}}
	I1025 09:27:39.298832  168764 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-707917 --format={{.State.Status}}
	I1025 09:27:39.302164  168764 out.go:179] * Verifying Kubernetes components...
	I1025 09:27:39.305311  168764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:27:39.332903  168764 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:27:38.842409  164447 pod_ready.go:94] pod "coredns-66bc5c9577-jwhsz" is "Ready"
	I1025 09:27:38.842434  164447 pod_ready.go:86] duration metric: took 9.006815657s for pod "coredns-66bc5c9577-jwhsz" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:38.849277  164447 pod_ready.go:83] waiting for pod "etcd-pause-993166" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:38.858343  164447 pod_ready.go:94] pod "etcd-pause-993166" is "Ready"
	I1025 09:27:38.858419  164447 pod_ready.go:86] duration metric: took 9.117254ms for pod "etcd-pause-993166" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:38.861462  164447 pod_ready.go:83] waiting for pod "kube-apiserver-pause-993166" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:38.869814  164447 pod_ready.go:94] pod "kube-apiserver-pause-993166" is "Ready"
	I1025 09:27:38.869837  164447 pod_ready.go:86] duration metric: took 8.338467ms for pod "kube-apiserver-pause-993166" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:38.874825  164447 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-993166" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:39.879906  164447 pod_ready.go:94] pod "kube-controller-manager-pause-993166" is "Ready"
	I1025 09:27:39.879936  164447 pod_ready.go:86] duration metric: took 1.005088963s for pod "kube-controller-manager-pause-993166" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:40.040748  164447 pod_ready.go:83] waiting for pod "kube-proxy-5rlkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:40.439227  164447 pod_ready.go:94] pod "kube-proxy-5rlkq" is "Ready"
	I1025 09:27:40.439259  164447 pod_ready.go:86] duration metric: took 398.474514ms for pod "kube-proxy-5rlkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:40.639320  164447 pod_ready.go:83] waiting for pod "kube-scheduler-pause-993166" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:41.039091  164447 pod_ready.go:94] pod "kube-scheduler-pause-993166" is "Ready"
	I1025 09:27:41.039118  164447 pod_ready.go:86] duration metric: took 399.771119ms for pod "kube-scheduler-pause-993166" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:27:41.039129  164447 pod_ready.go:40] duration metric: took 11.208606628s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:27:41.096308  164447 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:27:41.099813  164447 out.go:179] * Done! kubectl is now configured to use "pause-993166" cluster and "default" namespace by default
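
The pod_ready polling above is minikube's own client-go loop over the labeled kube-system pods; from outside, a roughly equivalent check (a sketch, not what the test harness runs) is:

    # wait for the cluster's DNS pods to report Ready
    kubectl --context pause-993166 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=120s
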
	I1025 09:27:39.335897  168764 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:27:39.335918  168764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:27:39.335984  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:39.338560  168764 kapi.go:59] client config for kubernetes-upgrade-707917: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/client.crt", KeyFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/profiles/kubernetes-upgrade-707917/client.key", CAFile:"/home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 09:27:39.338870  168764 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-707917"
	W1025 09:27:39.338882  168764 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:27:39.338906  168764 host.go:66] Checking if "kubernetes-upgrade-707917" exists ...
	I1025 09:27:39.339407  168764 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-707917 --format={{.State.Status}}
	I1025 09:27:39.361806  168764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/kubernetes-upgrade-707917/id_rsa Username:docker}
	I1025 09:27:39.387383  168764 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:27:39.387403  168764 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:27:39.387462  168764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-707917
	I1025 09:27:39.415855  168764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/kubernetes-upgrade-707917/id_rsa Username:docker}
	I1025 09:27:39.521534  168764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:27:39.536028  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:27:39.542101  168764 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:27:39.542171  168764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
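
This pgrep probe, repeated below at roughly 500ms intervals, is how minikube waits for the apiserver process to appear; a condensed equivalent with a 60s cap:

    # poll until a kube-apiserver process for this cluster shows up
    for _ in $(seq 1 120); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done
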
	I1025 09:27:39.574095  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:27:39.683676  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:39.683834  168764 retry.go:31] will retry after 315.482482ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:27:39.694343  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:39.694376  168764 retry.go:31] will retry after 328.324201ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
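
All of the apply failures that follow are the same condition: kubectl's validation needs the OpenAPI document from localhost:8443, and the apiserver is not accepting connections yet, so each apply is retried after a short randomized backoff. Stripped of the jitter, the loop is just:

    # retry the addon apply until the apiserver's port accepts connections
    KUBECTL=/var/lib/minikube/binaries/v1.34.1/kubectl
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$KUBECTL" \
          apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml; do
      sleep 1
    done
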
	I1025 09:27:39.999687  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:27:40.023481  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:27:40.043004  168764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 09:27:40.097371  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:40.097405  168764 retry.go:31] will retry after 196.667511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:27:40.127483  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:40.127512  168764 retry.go:31] will retry after 192.743774ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:40.294622  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:27:40.321104  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:27:40.371641  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:40.371670  168764 retry.go:31] will retry after 842.110478ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:27:40.408990  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:40.409071  168764 retry.go:31] will retry after 836.571877ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:40.543244  168764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:27:41.043180  168764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:27:41.214944  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:27:41.247757  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:27:41.340738  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:41.340765  168764 retry.go:31] will retry after 651.31665ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:27:41.412920  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:41.412947  168764 retry.go:31] will retry after 657.241675ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:41.542267  168764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:27:41.993189  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:27:42.042743  168764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:27:42.070914  168764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1025 09:27:42.078299  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:42.078332  168764 retry.go:31] will retry after 1.439824709s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:27:42.185661  168764 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:42.185691  168764 retry.go:31] will retry after 1.2616049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:27:42.543169  168764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	
	
	==> CRI-O <==
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.571475058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.591663535Z" level=info msg="Starting container: e8b340e357e7745dcfdd28f2f2837779d619fbcc48c0299f33f01bc4c4338c4d" id=2b83c092-01fb-4d69-be46-9346dd617a5e name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.636297722Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.636676067Z" level=info msg="Started container" PID=2442 containerID=e8b340e357e7745dcfdd28f2f2837779d619fbcc48c0299f33f01bc4c4338c4d description=kube-system/coredns-66bc5c9577-jwhsz/coredns id=2b83c092-01fb-4d69-be46-9346dd617a5e name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c6fd4ba5f139a1b04ce511c15c027a73c5ba3028c019e7ade5fd60b7ce1a37a
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.638270106Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.662801971Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.663731235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.688348164Z" level=info msg="Created container 714664c95ab9237844e38f812333c61dd85473b8f3a9fe85af446cf295917418: kube-system/kube-controller-manager-pause-993166/kube-controller-manager" id=df4c1292-56f9-49ec-b58a-479146e4065e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.689415219Z" level=info msg="Created container 47fe39f906d8d7285850fca5853bc4537e57f459e57fd794878c97272cbeb938: kube-system/etcd-pause-993166/etcd" id=fb08dc13-424d-4bf1-b975-b52909b999b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.701098577Z" level=info msg="Starting container: 714664c95ab9237844e38f812333c61dd85473b8f3a9fe85af446cf295917418" id=2168a84e-f250-4fd0-9c60-98d2d4bad4f8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.706915845Z" level=info msg="Starting container: 47fe39f906d8d7285850fca5853bc4537e57f459e57fd794878c97272cbeb938" id=47508fdf-ff90-4881-ac47-bf3196c18105 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.742221071Z" level=info msg="Started container" PID=2456 containerID=714664c95ab9237844e38f812333c61dd85473b8f3a9fe85af446cf295917418 description=kube-system/kube-controller-manager-pause-993166/kube-controller-manager id=2168a84e-f250-4fd0-9c60-98d2d4bad4f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=17e9c9e4ea58af7945e7a41fb24e810eceb7da98f44acf722ec6f142324d155c
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.751317864Z" level=info msg="Started container" PID=2460 containerID=47fe39f906d8d7285850fca5853bc4537e57f459e57fd794878c97272cbeb938 description=kube-system/etcd-pause-993166/etcd id=47508fdf-ff90-4881-ac47-bf3196c18105 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1015f4ce15049980ccb0a081cb52ffbea7514beced9d0552da94ce0d0e3ea3a4
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.772189931Z" level=info msg="Created container 886db0afca69285c74060f184d80584ce967a64cc5c10a575ddd4e8bee524b4c: kube-system/kube-scheduler-pause-993166/kube-scheduler" id=fed59f04-ada5-4cfa-9134-36890c70b8d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.774546174Z" level=info msg="Starting container: 886db0afca69285c74060f184d80584ce967a64cc5c10a575ddd4e8bee524b4c" id=de4bf034-e9d8-43c8-9f60-51b50a5a0fac name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:27:20 pause-993166 crio[2170]: time="2025-10-25T09:27:20.776412972Z" level=info msg="Started container" PID=2472 containerID=886db0afca69285c74060f184d80584ce967a64cc5c10a575ddd4e8bee524b4c description=kube-system/kube-scheduler-pause-993166/kube-scheduler id=de4bf034-e9d8-43c8-9f60-51b50a5a0fac name=/runtime.v1.RuntimeService/StartContainer sandboxID=c16a2000d5672fb102bfe80a3520b2ce21374b8e6400f4fdb805e97a98f3dcfe
	Oct 25 09:27:21 pause-993166 crio[2170]: time="2025-10-25T09:27:21.106214409Z" level=info msg="Created container 2ca61daa9f640085335cff12d469a97aac33ed0ac86bb44265dd873b1b88ea7b: kube-system/kube-proxy-5rlkq/kube-proxy" id=9502902d-3836-46fd-9e20-48f819e015e0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:27:21 pause-993166 crio[2170]: time="2025-10-25T09:27:21.110244199Z" level=info msg="Starting container: 2ca61daa9f640085335cff12d469a97aac33ed0ac86bb44265dd873b1b88ea7b" id=070a4e77-d7e7-4a0f-b1c6-851d4e16b15a name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:27:21 pause-993166 crio[2170]: time="2025-10-25T09:27:21.118235249Z" level=info msg="Started container" PID=2477 containerID=2ca61daa9f640085335cff12d469a97aac33ed0ac86bb44265dd873b1b88ea7b description=kube-system/kube-proxy-5rlkq/kube-proxy id=070a4e77-d7e7-4a0f-b1c6-851d4e16b15a name=/runtime.v1.RuntimeService/StartContainer sandboxID=a86557571f7c803429c813e1c340799b2c30df8af0b9521663450da16d0b48c2
	Oct 25 09:27:25 pause-993166 crio[2170]: time="2025-10-25T09:27:25.2349325Z" level=info msg="Removing container: 87c5a3facf18119b6c304958a3ed79367860f43b651be0b4a9b1c500597d43cd" id=dc5a2620-211d-45b1-a614-f4edf4cc7a00 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:27:25 pause-993166 crio[2170]: time="2025-10-25T09:27:25.260179526Z" level=info msg="Removed container 87c5a3facf18119b6c304958a3ed79367860f43b651be0b4a9b1c500597d43cd: kube-system/kube-apiserver-pause-993166/kube-apiserver" id=dc5a2620-211d-45b1-a614-f4edf4cc7a00 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:27:25 pause-993166 crio[2170]: time="2025-10-25T09:27:25.271110246Z" level=info msg="Removing container: f563b86847be715f70af152a0fbab4b59318f54cbe306e06323d0f39adeec6dd" id=c720bbec-8e94-40e8-ac62-da78c07d2852 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:27:25 pause-993166 crio[2170]: time="2025-10-25T09:27:25.301708235Z" level=info msg="Removed container f563b86847be715f70af152a0fbab4b59318f54cbe306e06323d0f39adeec6dd: kube-system/etcd-pause-993166/etcd" id=c720bbec-8e94-40e8-ac62-da78c07d2852 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:27:25 pause-993166 crio[2170]: time="2025-10-25T09:27:25.320235585Z" level=info msg="Removing container: 169848acc3e0cecb70686ebdfdf92cf360efc3648f67252350f18a44c7a011cd" id=ccba1f3d-c6e8-44b2-8ae7-c91fe115e99a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:27:25 pause-993166 crio[2170]: time="2025-10-25T09:27:25.339375049Z" level=info msg="Removed container 169848acc3e0cecb70686ebdfdf92cf360efc3648f67252350f18a44c7a011cd: kube-system/kube-controller-manager-pause-993166/kube-controller-manager" id=ccba1f3d-c6e8-44b2-8ae7-c91fe115e99a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	2ca61daa9f640       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   26 seconds ago      Running             kube-proxy                1                   a86557571f7c8       kube-proxy-5rlkq                       kube-system
	886db0afca692       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   26 seconds ago      Running             kube-scheduler            2                   c16a2000d5672       kube-scheduler-pause-993166            kube-system
	47fe39f906d8d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   26 seconds ago      Running             etcd                      2                   1015f4ce15049       etcd-pause-993166                      kube-system
	714664c95ab92       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   26 seconds ago      Running             kube-controller-manager   2                   17e9c9e4ea58a       kube-controller-manager-pause-993166   kube-system
	e8b340e357e77       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   26 seconds ago      Running             coredns                   2                   4c6fd4ba5f139       coredns-66bc5c9577-jwhsz               kube-system
	8ffe0ddc38b5c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   26 seconds ago      Running             kube-apiserver            2                   9c54e466d03dc       kube-apiserver-pause-993166            kube-system
	76e14f6fbf01f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago       Created             coredns                   1                   4c6fd4ba5f139       coredns-66bc5c9577-jwhsz               kube-system
	2096c30fd3aa9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 minutes ago       Running             kindnet-cni               1                   ca4d7c18a9522       kindnet-f8dj2                          kube-system
	8c0bc05f35e5c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   2 minutes ago       Exited              kube-apiserver            1                   9c54e466d03dc       kube-apiserver-pause-993166            kube-system
	f33ce8d9cda08       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   2 minutes ago       Created             kube-scheduler            1                   c16a2000d5672       kube-scheduler-pause-993166            kube-system
	69577785cdf00       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   2 minutes ago       Exited              kube-controller-manager   1                   17e9c9e4ea58a       kube-controller-manager-pause-993166   kube-system
	0bcb1e61aa495       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   2 minutes ago       Exited              etcd                      1                   1015f4ce15049       etcd-pause-993166                      kube-system
	eef32253d3c56       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago       Exited              coredns                   0                   4c6fd4ba5f139       coredns-66bc5c9577-jwhsz               kube-system
	0e45c49e84a25       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 minutes ago       Exited              kindnet-cni               0                   ca4d7c18a9522       kindnet-f8dj2                          kube-system
	1f115f72025a0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 minutes ago       Exited              kube-proxy                0                   a86557571f7c8       kube-proxy-5rlkq                       kube-system
	ffc064adb3057       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   3 minutes ago       Exited              kube-scheduler            0                   c16a2000d5672       kube-scheduler-pause-993166            kube-system
	
	
	==> coredns [76e14f6fbf01f85f84b7dbe2758815257045635f805bae17697c1057870d2e45] <==
	
	
	==> coredns [e8b340e357e7745dcfdd28f2f2837779d619fbcc48c0299f33f01bc4c4338c4d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34034 - 31432 "HINFO IN 3128138462265537033.2385866799071473184. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024969278s
	
	
	==> coredns [eef32253d3c56e41d04f3cfb281703e63313570f9ae5713544b1d85e07c65fdb] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40878 - 41302 "HINFO IN 4088001627319950776.616337305452426126. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010883658s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-993166
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-993166
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=pause-993166
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_24_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:24:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-993166
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:27:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:27:38 +0000   Sat, 25 Oct 2025 09:24:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:27:38 +0000   Sat, 25 Oct 2025 09:24:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:27:38 +0000   Sat, 25 Oct 2025 09:24:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:27:38 +0000   Sat, 25 Oct 2025 09:27:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-993166
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                b714354e-bb8e-4060-a76c-5fcc136f8956
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-jwhsz                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m54s
	  kube-system                 etcd-pause-993166                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m59s
	  kube-system                 kindnet-f8dj2                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m54s
	  kube-system                 kube-apiserver-pause-993166             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m59s
	  kube-system                 kube-controller-manager-pause-993166    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m59s
	  kube-system                 kube-proxy-5rlkq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kube-scheduler-pause-993166             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 2m52s                kube-proxy       
	  Normal   Starting                 18s                  kube-proxy       
	  Warning  CgroupV1                 3m8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m8s (x8 over 3m8s)  kubelet          Node pause-993166 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m8s (x8 over 3m8s)  kubelet          Node pause-993166 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m8s (x8 over 3m8s)  kubelet          Node pause-993166 status is now: NodeHasSufficientPID
	  Normal   Starting                 3m                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m59s                kubelet          Node pause-993166 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m59s                kubelet          Node pause-993166 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m59s                kubelet          Node pause-993166 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m55s                node-controller  Node pause-993166 event: Registered Node pause-993166 in Controller
	  Warning  ContainerGCFailed        60s (x2 over 2m)     kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             26s (x8 over 98s)    kubelet          Node pause-993166 status is now: NodeNotReady
	  Normal   RegisteredNode           16s                  node-controller  Node pause-993166 event: Registered Node pause-993166 in Controller
	  Normal   NodeReady                9s (x2 over 2m13s)   kubelet          Node pause-993166 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct25 08:59] overlayfs: idmapped layers are currently not supported
	[Oct25 09:00] overlayfs: idmapped layers are currently not supported
	[  +5.088983] overlayfs: idmapped layers are currently not supported
	[ +51.199451] overlayfs: idmapped layers are currently not supported
	[Oct25 09:01] overlayfs: idmapped layers are currently not supported
	[Oct25 09:02] overlayfs: idmapped layers are currently not supported
	[Oct25 09:07] overlayfs: idmapped layers are currently not supported
	[Oct25 09:08] overlayfs: idmapped layers are currently not supported
	[Oct25 09:09] overlayfs: idmapped layers are currently not supported
	[Oct25 09:10] overlayfs: idmapped layers are currently not supported
	[Oct25 09:11] overlayfs: idmapped layers are currently not supported
	[Oct25 09:13] overlayfs: idmapped layers are currently not supported
	[ +18.632418] overlayfs: idmapped layers are currently not supported
	[ +13.271347] overlayfs: idmapped layers are currently not supported
	[Oct25 09:14] overlayfs: idmapped layers are currently not supported
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0bcb1e61aa4954f93376eff6871c63bd7fef85c6400a8063b3d8ccb280fc9dec] <==
	{"level":"warn","ts":"2025-10-25T09:25:46.400603Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"warn","ts":"2025-10-25T09:25:46.400697Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
	{"level":"info","ts":"2025-10-25T09:25:46.400711Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.85.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.85.2:2380","--initial-cluster=pause-993166=https://192.168.85.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.85.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.85.2:2380","--name=pause-993166","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-interval=5s"]}
	{"level":"info","ts":"2025-10-25T09:25:46.400787Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2025-10-25T09:25:46.400802Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-10-25T09:25:46.400814Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-25T09:25:46.400835Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-25T09:25:46.401258Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"]}
	{"level":"info","ts":"2025-10-25T09:25:46.401365Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.4","git-sha":"5400cdc39","go-version":"go1.23.11","go-os":"linux","go-arch":"arm64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-993166","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-s
tate":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	{"level":"info","ts":"2025-10-25T09:25:46.402019Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0x40003340e8}"}
	
	
	==> etcd [47fe39f906d8d7285850fca5853bc4537e57f459e57fd794878c97272cbeb938] <==
	{"level":"warn","ts":"2025-10-25T09:27:25.586813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:25.684487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:25.721725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:25.787975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:25.821359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:25.894626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:25.941033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.002132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.037650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.076704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.146725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.161415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.197071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.236191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.266222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.330263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.440426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.440619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.511387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.544574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.594970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.632537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.657520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.678543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:27:26.922771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58602","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:27:47 up  1:10,  0 user,  load average: 2.81, 2.46, 2.29
	Linux pause-993166 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0e45c49e84a25010e320a542407973a38a5f8065f7093dd7b3d26e2e6c546c62] <==
	I1025 09:24:54.320974       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:24:54.414221       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 09:24:54.414359       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:24:54.414378       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:24:54.414390       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:24:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:24:54.524901       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:24:54.525021       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:24:54.525055       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:24:54.525379       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 09:25:24.525312       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:25:24.525315       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:25:24.525411       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:25:24.615041       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1025 09:25:26.125575       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:25:26.125608       1 metrics.go:72] Registering metrics
	I1025 09:25:26.125683       1 controller.go:711] "Syncing nftables rules"
	I1025 09:25:34.530163       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:25:34.530300       1 main.go:301] handling current node
	
	
	==> kindnet [2096c30fd3aa94582633e4db5513e39832fe004de98978844e87c7666828be6d] <==
	E1025 09:25:46.818181       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:25:47.579111       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:25:47.752456       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:25:47.790452       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:25:48.161381       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:25:50.201487       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:25:50.276609       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:25:50.690518       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:25:51.021367       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:25:55.576732       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:25:55.670679       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:25:56.540755       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:25:56.772125       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:26:03.174322       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:26:04.729518       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:26:05.115230       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:26:07.721792       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:26:20.153753       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:26:20.477431       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:26:24.374928       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:26:31.093664       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:26:47.370962       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:26:50.469821       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:27:10.596995       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:27:12.596880       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	
	
	==> kube-apiserver [8c0bc05f35e5cfd23f679343fe56282677202e373ae2cbd191ee4a80dd1cc492] <==
	
	
	==> kube-apiserver [8ffe0ddc38b5cd6b1ec6e998b477c88dfe3de5017eec479555b1a02a271662c6] <==
	I1025 09:27:28.324044       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:27:28.324061       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:27:28.324318       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 09:27:28.324363       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:27:28.324443       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:27:28.324486       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:27:28.345939       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 09:27:28.346117       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:27:28.347365       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:27:28.347391       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:27:28.347397       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:27:28.347404       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:27:28.376800       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:27:28.417084       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:27:28.422509       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:27:28.422751       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 09:27:28.475440       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 09:27:28.506323       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:27:28.925326       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:27:30.194881       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:27:31.833746       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:27:31.860597       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:27:31.908060       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:27:31.975946       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:27:32.060187       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [69577785cdf0018c99ee3138be8b8466664873956fae9df607ab4b9f0211856b] <==
	
	
	==> kube-controller-manager [714664c95ab9237844e38f812333c61dd85473b8f3a9fe85af446cf295917418] <==
	I1025 09:27:31.711484       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:27:31.711592       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:27:31.711606       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:27:31.711615       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:27:31.711625       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:27:31.711634       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:27:31.711645       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:27:31.711671       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:27:31.713427       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:27:31.713673       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:27:31.724141       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:27:31.724812       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:27:31.728639       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:27:31.732823       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:27:31.733598       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:27:31.742798       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:27:31.745925       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:27:31.750363       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:27:31.750477       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:27:31.750835       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:27:31.750895       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:27:31.750911       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:27:31.757084       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 09:27:31.762621       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:27:41.695185       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1f115f72025a0c69095b7e23981b889c5f6b849f9233c4dc87b8320007c8dc3a] <==
	I1025 09:24:54.294871       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:24:54.375955       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:24:54.477517       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:24:54.477633       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 09:24:54.477747       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:24:54.495902       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:24:54.495962       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:24:54.499894       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:24:54.500202       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:24:54.500225       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:24:54.501896       1 config.go:200] "Starting service config controller"
	I1025 09:24:54.501970       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:24:54.502171       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:24:54.502200       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:24:54.502236       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:24:54.502261       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:24:54.503220       1 config.go:309] "Starting node config controller"
	I1025 09:24:54.504535       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:24:54.504828       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:24:54.602286       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:24:54.602372       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:24:54.602615       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [2ca61daa9f640085335cff12d469a97aac33ed0ac86bb44265dd873b1b88ea7b] <==
	I1025 09:27:27.199426       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:27:27.331774       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:27:28.479033       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:27:28.479136       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 09:27:28.479241       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:27:28.847704       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:27:28.848988       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:27:28.889380       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:27:28.889850       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:27:28.890284       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:27:28.900173       1 config.go:200] "Starting service config controller"
	I1025 09:27:28.900212       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:27:28.900239       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:27:28.900248       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:27:28.900381       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:27:28.900394       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:27:28.913058       1 config.go:309] "Starting node config controller"
	I1025 09:27:28.913155       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:27:29.004897       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:27:29.005050       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:27:29.005379       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:27:29.013936       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [886db0afca69285c74060f184d80584ce967a64cc5c10a575ddd4e8bee524b4c] <==
	I1025 09:27:23.323419       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:27:28.258225       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:27:28.258333       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:27:28.258366       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:27:28.258415       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:27:28.434130       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:27:28.434230       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:27:28.442648       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:27:28.442947       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:27:28.442913       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:27:28.442892       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:27:28.552419       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f33ce8d9cda081620ca6bb2e65c2a49aa70fd1d8d5e3fe5766fdce8e06ebedba] <==
	
	
	==> kube-scheduler [ffc064adb3057dcbcb7e698a5374601d1a883faa933a2d0a24564611c6950319] <==
	I1025 09:24:43.660474       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:24:46.474601       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:24:46.474707       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:24:46.475237       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:24:46.475304       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:24:46.540666       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:24:46.540758       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:24:46.548571       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:24:46.548682       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:24:46.557739       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:24:46.548756       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 09:24:46.594433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1025 09:24:47.458580       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:25:39.481280       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1025 09:25:39.481458       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1025 09:25:39.481472       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1025 09:25:39.481545       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:25:39.481572       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1025 09:25:39.481619       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 25 09:27:20 pause-993166 kubelet[1302]: E1025 09:27:20.429673    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-993166\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2313188a20123f8b90114e44acf6d422" pod="kube-system/kube-controller-manager-pause-993166"
	Oct 25 09:27:20 pause-993166 kubelet[1302]: E1025 09:27:20.429964    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-993166\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="2313188a20123f8b90114e44acf6d422" pod="kube-system/kube-controller-manager-pause-993166"
	Oct 25 09:27:20 pause-993166 kubelet[1302]: E1025 09:27:20.431096    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5rlkq\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="18988bb4-0b22-4e84-99ed-c40fd8525128" pod="kube-system/kube-proxy-5rlkq"
	Oct 25 09:27:20 pause-993166 kubelet[1302]: E1025 09:27:20.432418    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-f8dj2\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0a881c6a-52f1-4a77-a887-6a1f589c8605" pod="kube-system/kindnet-f8dj2"
	Oct 25 09:27:20 pause-993166 kubelet[1302]: E1025 09:27:20.432711    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-jwhsz\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4f925e04-50bc-46af-9c96-c0ec0fb36a26" pod="kube-system/coredns-66bc5c9577-jwhsz"
	Oct 25 09:27:20 pause-993166 kubelet[1302]: E1025 09:27:20.432931    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-993166\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="eea85baa70a68cd406c509e5652fe75c" pod="kube-system/etcd-pause-993166"
	Oct 25 09:27:20 pause-993166 kubelet[1302]: E1025 09:27:20.433176    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-993166\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b46b3bb615275f687eb03acd77879e8a" pod="kube-system/kube-scheduler-pause-993166"
	Oct 25 09:27:20 pause-993166 kubelet[1302]: E1025 09:27:20.433451    1302 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-993166\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b9fc1ab0a251d338e1d857eab8b4410f" pod="kube-system/kube-apiserver-pause-993166"
	Oct 25 09:27:21 pause-993166 kubelet[1302]: I1025 09:27:21.210950    1302 setters.go:543] "Node became not ready" node="pause-993166" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-25T09:27:21Z","lastTransitionTime":"2025-10-25T09:27:21Z","reason":"KubeletNotReady","message":"container runtime is down"}
	Oct 25 09:27:25 pause-993166 kubelet[1302]: I1025 09:27:25.229439    1302 scope.go:117] "RemoveContainer" containerID="87c5a3facf18119b6c304958a3ed79367860f43b651be0b4a9b1c500597d43cd"
	Oct 25 09:27:25 pause-993166 kubelet[1302]: I1025 09:27:25.236899    1302 status_manager.go:507] "Container startup changed for unknown container" pod="kube-system/kube-apiserver-pause-993166" containerID="cri-o://8c0bc05f35e5cfd23f679343fe56282677202e373ae2cbd191ee4a80dd1cc492"
	Oct 25 09:27:25 pause-993166 kubelet[1302]: I1025 09:27:25.244278    1302 status_manager.go:507] "Container startup changed for unknown container" pod="kube-system/etcd-pause-993166" containerID="cri-o://0bcb1e61aa4954f93376eff6871c63bd7fef85c6400a8063b3d8ccb280fc9dec"
	Oct 25 09:27:25 pause-993166 kubelet[1302]: I1025 09:27:25.244550    1302 status_manager.go:444] "Container readiness changed for unknown container" pod="kube-system/etcd-pause-993166" containerID="cri-o://0bcb1e61aa4954f93376eff6871c63bd7fef85c6400a8063b3d8ccb280fc9dec"
	Oct 25 09:27:25 pause-993166 kubelet[1302]: I1025 09:27:25.244692    1302 status_manager.go:444] "Container readiness changed for unknown container" pod="kube-system/kube-apiserver-pause-993166" containerID="cri-o://8c0bc05f35e5cfd23f679343fe56282677202e373ae2cbd191ee4a80dd1cc492"
	Oct 25 09:27:25 pause-993166 kubelet[1302]: I1025 09:27:25.260994    1302 scope.go:117] "RemoveContainer" containerID="f563b86847be715f70af152a0fbab4b59318f54cbe306e06323d0f39adeec6dd"
	Oct 25 09:27:25 pause-993166 kubelet[1302]: I1025 09:27:25.306231    1302 scope.go:117] "RemoveContainer" containerID="169848acc3e0cecb70686ebdfdf92cf360efc3648f67252350f18a44c7a011cd"
	Oct 25 09:27:28 pause-993166 kubelet[1302]: E1025 09:27:28.053376    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-jwhsz\" is forbidden: User \"system:node:pause-993166\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-993166' and this object" podUID="4f925e04-50bc-46af-9c96-c0ec0fb36a26" pod="kube-system/coredns-66bc5c9577-jwhsz"
	Oct 25 09:27:28 pause-993166 kubelet[1302]: E1025 09:27:28.059345    1302 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-993166\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-993166' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 25 09:27:28 pause-993166 kubelet[1302]: E1025 09:27:28.140728    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-993166\" is forbidden: User \"system:node:pause-993166\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-993166' and this object" podUID="eea85baa70a68cd406c509e5652fe75c" pod="kube-system/etcd-pause-993166"
	Oct 25 09:27:28 pause-993166 kubelet[1302]: W1025 09:27:28.195498    1302 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 25 09:27:28 pause-993166 kubelet[1302]: E1025 09:27:28.250898    1302 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-993166\" is forbidden: User \"system:node:pause-993166\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-993166' and this object" podUID="b46b3bb615275f687eb03acd77879e8a" pod="kube-system/kube-scheduler-pause-993166"
	Oct 25 09:27:38 pause-993166 kubelet[1302]: W1025 09:27:38.229975    1302 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 25 09:27:41 pause-993166 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:27:41 pause-993166 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:27:41 pause-993166 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
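Note: in the kubelet log above, the node is marked NotReady with reason KubeletNotReady ("container runtime is down") at 09:27:21, and systemd stops kubelet at 09:27:41; together with the connection-refused errors against 192.168.85.2:8443 this is consistent with the runtime being paused underneath a still-registered node. A minimal way to read that condition directly, assuming the pause-993166 kubeconfig context is still reachable:

	kubectl --context pause-993166 get node pause-993166 -o jsonpath='{.status.conditions[?(@.type=="Ready")].reason}'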
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-993166 -n pause-993166
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-993166 -n pause-993166: exit status 2 (403.044878ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-993166 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.21s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-881642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-881642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (291.282532ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:31:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
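Note: the MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state check: before enabling an addon it lists runc containers on the node, and here "sudo runc list -f json" itself fails because /run/runc does not exist on this crio node. The failing check can be reproduced by hand, assuming the profile is still running:

	out/minikube-linux-arm64 -p old-k8s-version-881642 ssh -- sudo runc list -f json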
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-881642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-881642 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-881642 describe deploy/metrics-server -n kube-system: exit status 1 (86.98873ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-881642 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
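Note: the assertion above expects the metrics-server deployment's image to carry the fake.domain registry override passed via --images/--registries. Because the enable step already failed, the deployment was never created and the describe call finds nothing; had it existed, the applied image could be checked with standard kubectl, for example:

	kubectl --context old-k8s-version-881642 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'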
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-881642
helpers_test.go:243: (dbg) docker inspect old-k8s-version-881642:

-- stdout --
	[
	    {
	        "Id": "e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306",
	        "Created": "2025-10-25T09:29:58.440367349Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182982,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:29:58.541191139Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/hostname",
	        "HostsPath": "/var/lib/docker/containers/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/hosts",
	        "LogPath": "/var/lib/docker/containers/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306-json.log",
	        "Name": "/old-k8s-version-881642",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-881642:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-881642",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306",
	                "LowerDir": "/var/lib/docker/overlay2/6df03c5857d4595a180f3a88a7c703bebe35718ada3a63bcd7f20b5908953f91-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6df03c5857d4595a180f3a88a7c703bebe35718ada3a63bcd7f20b5908953f91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6df03c5857d4595a180f3a88a7c703bebe35718ada3a63bcd7f20b5908953f91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6df03c5857d4595a180f3a88a7c703bebe35718ada3a63bcd7f20b5908953f91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-881642",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-881642/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-881642",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-881642",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-881642",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cb5b8d8636ff069041e4a0c12868a32a9065f546945bd603fd978f14ecf0908",
	            "SandboxKey": "/var/run/docker/netns/8cb5b8d8636f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-881642": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:04:1c:d0:c6:10",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "917e141362c271bc727ce35937091cd630c8eec9e1077a440c52d3089c688f49",
	                    "EndpointID": "b2d9486d62525d369af44944719685dd121830e605124df7ad7d9ba5c2f333a1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-881642",
	                        "e27d1cd7e425"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
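Note: per the inspect output above, the container is running and its five exposed ports are published on 127.0.0.1 under ephemeral host ports (22→33048, 2376→33049, 5000→33050, 8443→33051, 32443→33052). The API-server mapping can be resolved on the same host with the docker CLI:

	docker port old-k8s-version-881642 8443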
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-881642 -n old-k8s-version-881642
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-881642 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-881642 logs -n 25: (1.218404306s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-068349 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo containerd config dump                                                                                                                                                                                                  │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo crio config                                                                                                                                                                                                             │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ delete  │ -p cilium-068349                                                                                                                                                                                                                              │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ start   │ -p force-systemd-env-991333 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-991333  │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ force-systemd-flag-100847 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-100847 │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ delete  │ -p force-systemd-flag-100847                                                                                                                                                                                                                  │ force-systemd-flag-100847 │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ start   │ -p cert-expiration-440252 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-440252    │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:29 UTC │
	│ delete  │ -p force-systemd-env-991333                                                                                                                                                                                                                   │ force-systemd-env-991333  │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p cert-options-483456 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ cert-options-483456 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ -p cert-options-483456 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ delete  │ -p cert-options-483456                                                                                                                                                                                                                        │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-881642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:29:52
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:29:52.483106  182527 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:29:52.483585  182527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:29:52.483600  182527 out.go:374] Setting ErrFile to fd 2...
	I1025 09:29:52.483605  182527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:29:52.484325  182527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:29:52.484968  182527 out.go:368] Setting JSON to false
	I1025 09:29:52.485906  182527 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4344,"bootTime":1761380249,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:29:52.486138  182527 start.go:141] virtualization:  
	I1025 09:29:52.490232  182527 out.go:179] * [old-k8s-version-881642] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:29:52.492894  182527 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:29:52.492961  182527 notify.go:220] Checking for updates...
	I1025 09:29:52.499967  182527 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:29:52.503387  182527 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:29:52.506803  182527 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:29:52.510128  182527 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:29:52.513275  182527 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:29:52.516914  182527 config.go:182] Loaded profile config "cert-expiration-440252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:29:52.517024  182527 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:29:52.541400  182527 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:29:52.541520  182527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:29:52.598326  182527 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:29:52.588338595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:29:52.598435  182527 docker.go:318] overlay module found
	I1025 09:29:52.601769  182527 out.go:179] * Using the docker driver based on user configuration
	I1025 09:29:52.605538  182527 start.go:305] selected driver: docker
	I1025 09:29:52.605562  182527 start.go:925] validating driver "docker" against <nil>
	I1025 09:29:52.605575  182527 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:29:52.606408  182527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:29:52.664262  182527 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:29:52.654329998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:29:52.664421  182527 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:29:52.664672  182527 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:29:52.667820  182527 out.go:179] * Using Docker driver with root privileges
	I1025 09:29:52.670944  182527 cni.go:84] Creating CNI manager for ""
	I1025 09:29:52.671020  182527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:29:52.671036  182527 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:29:52.671122  182527 start.go:349] cluster config:
	{Name:old-k8s-version-881642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-881642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:29:52.674360  182527 out.go:179] * Starting "old-k8s-version-881642" primary control-plane node in "old-k8s-version-881642" cluster
	I1025 09:29:52.677496  182527 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:29:52.680517  182527 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:29:52.683357  182527 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 09:29:52.683444  182527 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1025 09:29:52.683460  182527 cache.go:58] Caching tarball of preloaded images
	I1025 09:29:52.683467  182527 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:29:52.683567  182527 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:29:52.683578  182527 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1025 09:29:52.683685  182527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/config.json ...
	I1025 09:29:52.683709  182527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/config.json: {Name:mkbca8e496e62eb059bd7a94ab3d0784fcccaf64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:29:52.703864  182527 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:29:52.703892  182527 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:29:52.703914  182527 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:29:52.703937  182527 start.go:360] acquireMachinesLock for old-k8s-version-881642: {Name:mk53d2a9d41389ec3c13b0a322d6d58e886d5c15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:29:52.704665  182527 start.go:364] duration metric: took 706.702µs to acquireMachinesLock for "old-k8s-version-881642"
	I1025 09:29:52.704701  182527 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-881642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-881642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:29:52.704777  182527 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:29:52.708139  182527 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:29:52.708379  182527 start.go:159] libmachine.API.Create for "old-k8s-version-881642" (driver="docker")
	I1025 09:29:52.708429  182527 client.go:168] LocalClient.Create starting
	I1025 09:29:52.708511  182527 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem
	I1025 09:29:52.708551  182527 main.go:141] libmachine: Decoding PEM data...
	I1025 09:29:52.708568  182527 main.go:141] libmachine: Parsing certificate...
	I1025 09:29:52.708623  182527 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem
	I1025 09:29:52.708644  182527 main.go:141] libmachine: Decoding PEM data...
	I1025 09:29:52.708654  182527 main.go:141] libmachine: Parsing certificate...
	I1025 09:29:52.709001  182527 cli_runner.go:164] Run: docker network inspect old-k8s-version-881642 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:29:52.724413  182527 cli_runner.go:211] docker network inspect old-k8s-version-881642 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:29:52.724496  182527 network_create.go:284] running [docker network inspect old-k8s-version-881642] to gather additional debugging logs...
	I1025 09:29:52.724517  182527 cli_runner.go:164] Run: docker network inspect old-k8s-version-881642
	W1025 09:29:52.742179  182527 cli_runner.go:211] docker network inspect old-k8s-version-881642 returned with exit code 1
	I1025 09:29:52.742217  182527 network_create.go:287] error running [docker network inspect old-k8s-version-881642]: docker network inspect old-k8s-version-881642: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-881642 not found
	I1025 09:29:52.742236  182527 network_create.go:289] output of [docker network inspect old-k8s-version-881642]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-881642 not found
	
	** /stderr **
	I1025 09:29:52.742358  182527 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:29:52.767180  182527 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4076b76bdd01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:93:ad:e4:3e:11} reservation:<nil>}
	I1025 09:29:52.767549  182527 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ab40ae949743 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ea:83:23:78:ca:4d} reservation:<nil>}
	I1025 09:29:52.767873  182527 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ff3fdd90dcc2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:d4:a3:43:c3:da} reservation:<nil>}
	I1025 09:29:52.768362  182527 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400196b490}
	I1025 09:29:52.768387  182527 network_create.go:124] attempt to create docker network old-k8s-version-881642 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 09:29:52.768443  182527 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-881642 old-k8s-version-881642
	I1025 09:29:52.830205  182527 network_create.go:108] docker network old-k8s-version-881642 192.168.76.0/24 created
	I1025 09:29:52.830238  182527 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-881642" container
	I1025 09:29:52.830326  182527 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:29:52.848730  182527 cli_runner.go:164] Run: docker volume create old-k8s-version-881642 --label name.minikube.sigs.k8s.io=old-k8s-version-881642 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:29:52.867939  182527 oci.go:103] Successfully created a docker volume old-k8s-version-881642
	I1025 09:29:52.868022  182527 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-881642-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-881642 --entrypoint /usr/bin/test -v old-k8s-version-881642:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:29:53.394960  182527 oci.go:107] Successfully prepared a docker volume old-k8s-version-881642
	I1025 09:29:53.395012  182527 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 09:29:53.395032  182527 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:29:53.395106  182527 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-881642:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:29:58.350917  182527 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-881642:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.955771658s)
	I1025 09:29:58.350947  182527 kic.go:203] duration metric: took 4.955912674s to extract preloaded images to volume ...
	W1025 09:29:58.351095  182527 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 09:29:58.351228  182527 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:29:58.416843  182527 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-881642 --name old-k8s-version-881642 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-881642 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-881642 --network old-k8s-version-881642 --ip 192.168.76.2 --volume old-k8s-version-881642:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:29:58.820512  182527 cli_runner.go:164] Run: docker container inspect old-k8s-version-881642 --format={{.State.Running}}
	I1025 09:29:58.843598  182527 cli_runner.go:164] Run: docker container inspect old-k8s-version-881642 --format={{.State.Status}}
	I1025 09:29:58.863820  182527 cli_runner.go:164] Run: docker exec old-k8s-version-881642 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:29:58.931338  182527 oci.go:144] the created container "old-k8s-version-881642" has a running status.
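The three probes after the big docker run are cheap readiness checks, reproducible by hand:

    # state checks mirror the cli_runner calls above
    docker container inspect old-k8s-version-881642 --format '{{.State.Running}}'   # want: true
    docker container inspect old-k8s-version-881642 --format '{{.State.Status}}'    # want: running
    # stat on a dpkg-managed file doubles as a "userspace is unpacked" probe
    docker exec old-k8s-version-881642 stat /var/lib/dpkg/alternatives/iptables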
	I1025 09:29:58.931374  182527 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/old-k8s-version-881642/id_rsa...
	I1025 09:30:00.552571  182527 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-2312/.minikube/machines/old-k8s-version-881642/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:30:00.611923  182527 cli_runner.go:164] Run: docker container inspect old-k8s-version-881642 --format={{.State.Status}}
	I1025 09:30:00.636228  182527 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:30:00.636369  182527 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-881642 chown docker:docker /home/docker/.ssh/authorized_keys]
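Key provisioning happens natively in Go, but the moving parts are the ordinary ones; a rough shell equivalent (MACHINE_DIR is illustrative, and docker cp stands in for the kic_runner stream copy shown above):

    MACHINE_DIR=$HOME/.minikube/machines/old-k8s-version-881642
    ssh-keygen -t rsa -N '' -f "$MACHINE_DIR/id_rsa"
    docker cp "$MACHINE_DIR/id_rsa.pub" old-k8s-version-881642:/home/docker/.ssh/authorized_keys
    docker exec --privileged old-k8s-version-881642 chown docker:docker /home/docker/.ssh/authorized_keys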
	I1025 09:30:00.704735  182527 cli_runner.go:164] Run: docker container inspect old-k8s-version-881642 --format={{.State.Status}}
	I1025 09:30:00.727175  182527 machine.go:93] provisionDockerMachine start ...
	I1025 09:30:00.727297  182527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-881642
	I1025 09:30:00.749076  182527 main.go:141] libmachine: Using SSH client type: native
	I1025 09:30:00.749459  182527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1025 09:30:00.749476  182527 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:30:00.914226  182527 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-881642
	
	I1025 09:30:00.914256  182527 ubuntu.go:182] provisioning hostname "old-k8s-version-881642"
	I1025 09:30:00.914344  182527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-881642
	I1025 09:30:00.934585  182527 main.go:141] libmachine: Using SSH client type: native
	I1025 09:30:00.934939  182527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1025 09:30:00.934965  182527 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-881642 && echo "old-k8s-version-881642" | sudo tee /etc/hostname
	I1025 09:30:01.123247  182527 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-881642
	
	I1025 09:30:01.123333  182527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-881642
	I1025 09:30:01.144327  182527 main.go:141] libmachine: Using SSH client type: native
	I1025 09:30:01.144672  182527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1025 09:30:01.144691  182527 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-881642' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-881642/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-881642' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:30:01.326900  182527 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:30:01.326972  182527 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:30:01.327042  182527 ubuntu.go:190] setting up certificates
	I1025 09:30:01.327076  182527 provision.go:84] configureAuth start
	I1025 09:30:01.327167  182527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-881642
	I1025 09:30:01.349688  182527 provision.go:143] copyHostCerts
	I1025 09:30:01.349773  182527 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:30:01.349784  182527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:30:01.349872  182527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:30:01.350046  182527 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:30:01.350056  182527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:30:01.350094  182527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:30:01.350191  182527 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:30:01.350202  182527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:30:01.350231  182527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 09:30:01.350304  182527 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-881642 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-881642]
	I1025 09:30:01.609928  182527 provision.go:177] copyRemoteCerts
	I1025 09:30:01.610033  182527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:30:01.610083  182527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-881642
	I1025 09:30:01.631573  182527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/old-k8s-version-881642/id_rsa Username:docker}
	I1025 09:30:01.738508  182527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:30:01.760829  182527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 09:30:01.784190  182527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:30:01.804985  182527 provision.go:87] duration metric: took 477.872009ms to configureAuth
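configureAuth also runs in Go crypto, but the resulting server.pem is a plain CA-signed certificate whose SAN list matches the san=[...] entry in the log. An equivalent openssl sketch (file names hypothetical):

    openssl req -new -key server-key.pem \
      -subj '/O=jenkins.old-k8s-version-881642' -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:old-k8s-version-881642') \
      -days 365 -out server.pem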
	I1025 09:30:01.805013  182527 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:30:01.805205  182527 config.go:182] Loaded profile config "old-k8s-version-881642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 09:30:01.805318  182527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-881642
	I1025 09:30:01.823569  182527 main.go:141] libmachine: Using SSH client type: native
	I1025 09:30:01.823928  182527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1025 09:30:01.823947  182527 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:30:02.100437  182527 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:30:02.100457  182527 machine.go:96] duration metric: took 1.373256169s to provisionDockerMachine
	I1025 09:30:02.100466  182527 client.go:171] duration metric: took 9.392026798s to LocalClient.Create
	I1025 09:30:02.100481  182527 start.go:167] duration metric: took 9.392103197s to libmachine.API.Create "old-k8s-version-881642"
	I1025 09:30:02.100488  182527 start.go:293] postStartSetup for "old-k8s-version-881642" (driver="docker")
	I1025 09:30:02.100499  182527 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:30:02.100561  182527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:30:02.100619  182527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-881642
	I1025 09:30:02.124995  182527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/old-k8s-version-881642/id_rsa Username:docker}
	I1025 09:30:02.234239  182527 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:30:02.237535  182527 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:30:02.237575  182527 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:30:02.237587  182527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:30:02.237644  182527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:30:02.237725  182527 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:30:02.237832  182527 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:30:02.245380  182527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:30:02.264294  182527 start.go:296] duration metric: took 163.79092ms for postStartSetup
	I1025 09:30:02.264668  182527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-881642
	I1025 09:30:02.282114  182527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/config.json ...
	I1025 09:30:02.282398  182527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:30:02.282437  182527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-881642
	I1025 09:30:02.300279  182527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/old-k8s-version-881642/id_rsa Username:docker}
	I1025 09:30:02.403016  182527 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:30:02.410141  182527 start.go:128] duration metric: took 9.705350773s to createHost
	I1025 09:30:02.410165  182527 start.go:83] releasing machines lock for "old-k8s-version-881642", held for 9.70548287s
	I1025 09:30:02.410237  182527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-881642
	I1025 09:30:02.427343  182527 ssh_runner.go:195] Run: cat /version.json
	I1025 09:30:02.427389  182527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:30:02.427395  182527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-881642
	I1025 09:30:02.427442  182527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-881642
	I1025 09:30:02.447579  182527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/old-k8s-version-881642/id_rsa Username:docker}
	I1025 09:30:02.463628  182527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/old-k8s-version-881642/id_rsa Username:docker}
	I1025 09:30:02.553711  182527 ssh_runner.go:195] Run: systemctl --version
	I1025 09:30:02.645378  182527 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:30:02.680904  182527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:30:02.685346  182527 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:30:02.685426  182527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:30:02.712282  182527 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
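The find/mv pass above disables competing bridge and podman CNI configs without deleting them, so the change is reversible by stripping the suffix. ssh_runner prints the command after shell parsing; with quoting restored it reads roughly:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;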
	I1025 09:30:02.712308  182527 start.go:495] detecting cgroup driver to use...
	I1025 09:30:02.712341  182527 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:30:02.712400  182527 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:30:02.733578  182527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:30:02.747005  182527 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:30:02.747080  182527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:30:02.771042  182527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:30:02.790304  182527 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:30:02.915130  182527 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:30:03.061332  182527 docker.go:234] disabling docker service ...
	I1025 09:30:03.061406  182527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:30:03.085742  182527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:30:03.100783  182527 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:30:03.222948  182527 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:30:03.347624  182527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:30:03.360778  182527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:30:03.377769  182527 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 09:30:03.377887  182527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:30:03.387747  182527 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:30:03.387872  182527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:30:03.396886  182527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:30:03.405827  182527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:30:03.415493  182527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:30:03.424691  182527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:30:03.434483  182527 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:30:03.448366  182527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:30:03.456955  182527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:30:03.464327  182527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:30:03.471833  182527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:30:03.584701  182527 ssh_runner.go:195] Run: sudo systemctl restart crio
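After the sed pipeline, /etc/crio/crio.conf.d/02-crio.conf should carry roughly the following keys (a sketch assuming the stock kicbase drop-in layout; the merged result can be checked against crio config, which the run itself invokes later):

    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup'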
	I1025 09:30:03.717271  182527 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:30:03.717362  182527 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:30:03.723598  182527 start.go:563] Will wait 60s for crictl version
	I1025 09:30:03.723659  182527 ssh_runner.go:195] Run: which crictl
	I1025 09:30:03.727428  182527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:30:03.762871  182527 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:30:03.762953  182527 ssh_runner.go:195] Run: crio --version
	I1025 09:30:03.793036  182527 ssh_runner.go:195] Run: crio --version
	I1025 09:30:03.825790  182527 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1025 09:30:03.828608  182527 cli_runner.go:164] Run: docker network inspect old-k8s-version-881642 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:30:03.844985  182527 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 09:30:03.848936  182527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:30:03.858670  182527 kubeadm.go:883] updating cluster {Name:old-k8s-version-881642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-881642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:30:03.858777  182527 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 09:30:03.858836  182527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:30:03.889748  182527 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:30:03.889771  182527 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:30:03.889826  182527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:30:03.916280  182527 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:30:03.916301  182527 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:30:03.916309  182527 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1025 09:30:03.916391  182527 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-881642 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-881642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
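The doubled ExecStart in the drop-in above is the standard systemd override idiom: an empty ExecStart= clears the command list inherited from the packaged kubelet.service, and the following ExecStart= becomes the only entry. To inspect the merged result on the node:

    sudo systemctl cat kubelet     # base unit plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload   # needed whenever the drop-in changes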
	I1025 09:30:03.916485  182527 ssh_runner.go:195] Run: crio config
	I1025 09:30:03.989695  182527 cni.go:84] Creating CNI manager for ""
	I1025 09:30:03.989721  182527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:30:03.989743  182527 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:30:03.989766  182527 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-881642 NodeName:old-k8s-version-881642 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:30:03.989901  182527 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-881642"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:30:03.989970  182527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1025 09:30:03.998660  182527 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:30:03.998739  182527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:30:04.009183  182527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1025 09:30:04.024218  182527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:30:04.039057  182527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1025 09:30:04.079449  182527 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:30:04.083853  182527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:30:04.094153  182527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:30:04.219721  182527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:30:04.236000  182527 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642 for IP: 192.168.76.2
	I1025 09:30:04.236075  182527 certs.go:195] generating shared ca certs ...
	I1025 09:30:04.236106  182527 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:30:04.236284  182527 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:30:04.236351  182527 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:30:04.236374  182527 certs.go:257] generating profile certs ...
	I1025 09:30:04.236461  182527 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.key
	I1025 09:30:04.236495  182527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt with IP's: []
	I1025 09:30:04.567424  182527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt ...
	I1025 09:30:04.567456  182527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: {Name:mkfd0ca60f8337c8910c4f182342c7d3161edee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:30:04.567659  182527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.key ...
	I1025 09:30:04.567675  182527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.key: {Name:mkdcec1f3c21cb36a521c74a74b684e7775ab3b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:30:04.567772  182527 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/apiserver.key.47999f4d
	I1025 09:30:04.567791  182527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/apiserver.crt.47999f4d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 09:30:05.048241  182527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/apiserver.crt.47999f4d ...
	I1025 09:30:05.048271  182527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/apiserver.crt.47999f4d: {Name:mkea3774091163a79e3bc79047a79ad905560530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:30:05.048454  182527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/apiserver.key.47999f4d ...
	I1025 09:30:05.048470  182527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/apiserver.key.47999f4d: {Name:mkcd4f525abfc77c411f36a51f47ffb13cae508a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:30:05.048558  182527 certs.go:382] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/apiserver.crt.47999f4d -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/apiserver.crt
	I1025 09:30:05.048640  182527 certs.go:386] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/apiserver.key.47999f4d -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/apiserver.key
	I1025 09:30:05.048708  182527 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/proxy-client.key
	I1025 09:30:05.048728  182527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/proxy-client.crt with IP's: []
	I1025 09:30:05.164049  182527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/proxy-client.crt ...
	I1025 09:30:05.164078  182527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/proxy-client.crt: {Name:mk3e13337aa9db6bf258754a298878756d45863c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:30:05.164248  182527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/proxy-client.key ...
	I1025 09:30:05.164263  182527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/proxy-client.key: {Name:mk29442785906f51c26e2efbf4810762f819c20b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:30:05.164451  182527 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:30:05.164500  182527 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:30:05.164517  182527 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:30:05.164543  182527 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:30:05.164570  182527 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:30:05.164592  182527 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:30:05.164643  182527 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:30:05.165209  182527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:30:05.183248  182527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:30:05.201994  182527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:30:05.219511  182527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:30:05.236874  182527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 09:30:05.255782  182527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:30:05.273795  182527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:30:05.291549  182527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:30:05.310328  182527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:30:05.327807  182527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:30:05.345232  182527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:30:05.363756  182527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:30:05.376707  182527 ssh_runner.go:195] Run: openssl version
	I1025 09:30:05.383260  182527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:30:05.391984  182527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:30:05.395811  182527 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:30:05.395882  182527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:30:05.437355  182527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:30:05.445855  182527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:30:05.454371  182527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:30:05.458338  182527 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:30:05.458400  182527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:30:05.504418  182527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:30:05.512788  182527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:30:05.521128  182527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:30:05.525030  182527 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:30:05.525134  182527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:30:05.566325  182527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
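The hash/symlink pairs above implement the c_rehash convention: OpenSSL locates CAs in /etc/ssl/certs by subject-name hash, with the .0 suffix as a collision counter. The generic two-step, using one cert from the log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem)
    sudo ln -fs /usr/share/ca-certificates/41102.pem "/etc/ssl/certs/${h}.0"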
	I1025 09:30:05.574786  182527 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:30:05.578354  182527 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:30:05.578422  182527 kubeadm.go:400] StartCluster: {Name:old-k8s-version-881642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-881642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:30:05.578503  182527 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:30:05.578565  182527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:30:05.606027  182527 cri.go:89] found id: ""
	I1025 09:30:05.606100  182527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:30:05.614286  182527 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:30:05.622507  182527 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:30:05.622644  182527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:30:05.630827  182527 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:30:05.630889  182527 kubeadm.go:157] found existing configuration files:
	
	I1025 09:30:05.630950  182527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:30:05.638704  182527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:30:05.638780  182527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:30:05.646177  182527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:30:05.653869  182527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:30:05.653934  182527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:30:05.661614  182527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:30:05.669030  182527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:30:05.669097  182527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:30:05.676330  182527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:30:05.683906  182527 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:30:05.683978  182527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
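All four grep/rm exchanges above amount to one rule: keep a kubeconfig under /etc/kubernetes only if it already points at the expected control-plane endpoint. A condensed equivalent:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done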
	I1025 09:30:05.691826  182527 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:30:05.736898  182527 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1025 09:30:05.737166  182527 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:30:05.776637  182527 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:30:05.776804  182527 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 09:30:05.776877  182527 kubeadm.go:318] OS: Linux
	I1025 09:30:05.776957  182527 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:30:05.777042  182527 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 09:30:05.777148  182527 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:30:05.777245  182527 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:30:05.777324  182527 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:30:05.777427  182527 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:30:05.777500  182527 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:30:05.777580  182527 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:30:05.777661  182527 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 09:30:05.870167  182527 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:30:05.870348  182527 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:30:05.870480  182527 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 09:30:06.074430  182527 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:30:06.080530  182527 out.go:252]   - Generating certificates and keys ...
	I1025 09:30:06.080703  182527 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:30:06.080826  182527 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:30:06.859048  182527 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:30:07.116946  182527 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:30:07.697778  182527 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:30:09.141701  182527 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:30:09.257808  182527 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:30:09.258332  182527 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-881642] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 09:30:09.525037  182527 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:30:09.525414  182527 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-881642] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 09:30:09.931542  182527 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:30:10.437085  182527 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:30:10.936748  182527 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:30:10.937019  182527 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:30:11.061800  182527 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:30:11.728959  182527 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:30:12.053590  182527 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:30:12.600574  182527 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:30:12.601475  182527 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:30:12.604299  182527 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:30:12.607715  182527 out.go:252]   - Booting up control plane ...
	I1025 09:30:12.607849  182527 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:30:12.607938  182527 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:30:12.608039  182527 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:30:12.637415  182527 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:30:12.637941  182527 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:30:12.638318  182527 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:30:12.764756  182527 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 09:30:20.266506  182527 kubeadm.go:318] [apiclient] All control plane components are healthy after 7.502960 seconds
	I1025 09:30:20.266639  182527 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:30:20.286266  182527 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:30:20.814556  182527 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:30:20.814777  182527 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-881642 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:30:21.328954  182527 kubeadm.go:318] [bootstrap-token] Using token: ylf9ge.s3p1yuiz7gppdkt8
	I1025 09:30:21.332027  182527 out.go:252]   - Configuring RBAC rules ...
	I1025 09:30:21.332156  182527 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:30:21.336984  182527 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:30:21.349733  182527 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:30:21.354522  182527 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:30:21.359462  182527 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:30:21.363474  182527 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:30:21.379984  182527 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:30:21.692978  182527 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:30:21.747753  182527 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:30:21.749487  182527 kubeadm.go:318] 
	I1025 09:30:21.749570  182527 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:30:21.749582  182527 kubeadm.go:318] 
	I1025 09:30:21.749664  182527 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:30:21.749674  182527 kubeadm.go:318] 
	I1025 09:30:21.749701  182527 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:30:21.749767  182527 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:30:21.749824  182527 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:30:21.749832  182527 kubeadm.go:318] 
	I1025 09:30:21.749888  182527 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:30:21.749893  182527 kubeadm.go:318] 
	I1025 09:30:21.749943  182527 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:30:21.749947  182527 kubeadm.go:318] 
	I1025 09:30:21.750057  182527 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:30:21.750137  182527 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:30:21.750214  182527 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:30:21.750221  182527 kubeadm.go:318] 
	I1025 09:30:21.750308  182527 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:30:21.750388  182527 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:30:21.750394  182527 kubeadm.go:318] 
	I1025 09:30:21.750482  182527 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ylf9ge.s3p1yuiz7gppdkt8 \
	I1025 09:30:21.750591  182527 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b \
	I1025 09:30:21.750613  182527 kubeadm.go:318] 	--control-plane 
	I1025 09:30:21.750618  182527 kubeadm.go:318] 
	I1025 09:30:21.750711  182527 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:30:21.750716  182527 kubeadm.go:318] 
	I1025 09:30:21.750802  182527 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ylf9ge.s3p1yuiz7gppdkt8 \
	I1025 09:30:21.750910  182527 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b 
	I1025 09:30:21.761268  182527 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 09:30:21.761398  182527 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:30:21.761414  182527 cni.go:84] Creating CNI manager for ""
	I1025 09:30:21.761422  182527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:30:21.766466  182527 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:30:21.769315  182527 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:30:21.787186  182527 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1025 09:30:21.787206  182527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:30:21.829585  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:30:22.870699  182527 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.041082098s)
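The "scp memory --> ..." lines are minikube's ssh_runner streaming an in-memory payload straight to a path on the node instead of copying a local file. A rough sketch of the same idea with golang.org/x/crypto/ssh; the address and user match the ssh client logged further down (127.0.0.1:33048, docker), but the password auth and manifest body here are placeholders:

    // Stream an in-memory manifest to the node over SSH ("scp memory").
    package main

    import (
        "bytes"
        "log"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.Password("placeholder")}, // minikube actually uses the machine's id_rsa key
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),                   // acceptable only for throwaway test VMs
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33048", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        manifest := []byte("# kindnet CNI manifest bytes held in memory\n")
        sess.Stdin = bytes.NewReader(manifest)
        // Write stdin to the destination path on the node.
        if err := sess.Run("sudo tee /var/tmp/minikube/cni.yaml >/dev/null"); err != nil {
            log.Fatal(err)
        }
    }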
	I1025 09:30:22.870738  182527 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:30:22.870836  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:22.870857  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-881642 minikube.k8s.io/updated_at=2025_10_25T09_30_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=old-k8s-version-881642 minikube.k8s.io/primary=true
	I1025 09:30:23.046499  182527 ops.go:34] apiserver oom_adj: -16
	I1025 09:30:23.046604  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:23.547192  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:24.046738  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:24.546667  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:25.047120  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:25.547459  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:26.046913  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:26.547019  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:27.047416  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:27.547632  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:28.047187  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:28.546767  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:29.047549  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:29.546945  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:30.056823  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:30.547578  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:31.047654  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:31.547222  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:32.047437  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:32.547098  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:33.047489  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:33.546713  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:34.047202  182527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:30:34.182539  182527 kubeadm.go:1113] duration metric: took 11.311762911s to wait for elevateKubeSystemPrivileges
	I1025 09:30:34.182573  182527 kubeadm.go:402] duration metric: took 28.604154697s to StartCluster
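The burst of identical "kubectl get sa default" runs above is a poll loop: after creating the minikube-rbac clusterrolebinding, minikube retries roughly every 500ms until the default service account exists, which is the signal that elevateKubeSystemPrivileges is done. A minimal sketch of that wait (kubectl on PATH is assumed; this is not minikube's exact code):

    // Poll for the "default" ServiceAccount until it exists or a deadline passes.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl",
                "--kubeconfig", "/var/lib/minikube/kubeconfig",
                "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing in the log
        }
        fmt.Println("timed out waiting for default service account")
    }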
	I1025 09:30:34.182590  182527 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:30:34.182661  182527 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:30:34.183651  182527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:30:34.183872  182527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:30:34.183879  182527 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:30:34.184130  182527 config.go:182] Loaded profile config "old-k8s-version-881642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 09:30:34.184169  182527 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:30:34.184230  182527 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-881642"
	I1025 09:30:34.184244  182527 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-881642"
	I1025 09:30:34.184266  182527 host.go:66] Checking if "old-k8s-version-881642" exists ...
	I1025 09:30:34.184732  182527 cli_runner.go:164] Run: docker container inspect old-k8s-version-881642 --format={{.State.Status}}
	I1025 09:30:34.185252  182527 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-881642"
	I1025 09:30:34.185279  182527 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-881642"
	I1025 09:30:34.185554  182527 cli_runner.go:164] Run: docker container inspect old-k8s-version-881642 --format={{.State.Status}}
	I1025 09:30:34.187957  182527 out.go:179] * Verifying Kubernetes components...
	I1025 09:30:34.190936  182527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:30:34.222575  182527 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-881642"
	I1025 09:30:34.222613  182527 host.go:66] Checking if "old-k8s-version-881642" exists ...
	I1025 09:30:34.223045  182527 cli_runner.go:164] Run: docker container inspect old-k8s-version-881642 --format={{.State.Status}}
	I1025 09:30:34.238675  182527 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:30:34.241648  182527 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:30:34.241671  182527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:30:34.241736  182527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-881642
	I1025 09:30:34.264909  182527 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:30:34.264930  182527 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:30:34.265132  182527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-881642
	I1025 09:30:34.293691  182527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/old-k8s-version-881642/id_rsa Username:docker}
	I1025 09:30:34.320094  182527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/old-k8s-version-881642/id_rsa Username:docker}
	I1025 09:30:34.511071  182527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:30:34.529611  182527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:30:34.529809  182527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:30:34.592263  182527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:30:35.798187  182527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.287032624s)
	I1025 09:30:35.798267  182527 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.268394811s)
	I1025 09:30:35.799145  182527 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-881642" to be "Ready" ...
	I1025 09:30:35.799468  182527 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.269778141s)
	I1025 09:30:35.799487  182527 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1025 09:30:35.800596  182527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.208309433s)
	I1025 09:30:35.855554  182527 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 09:30:35.857788  182527 addons.go:514] duration metric: took 1.673596723s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 09:30:36.303129  182527 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-881642" context rescaled to 1 replicas
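The long sed pipeline completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.76.1 here): it inserts a hosts{} block immediately before the forward plugin and then kubectl-replaces the ConfigMap. The same string transformation, sketched in Go against a simplified Corefile (the real one carries more plugins):

    // Insert a hosts{} entry for host.minikube.internal ahead of the forward plugin.
    package main

    import (
        "fmt"
        "strings"
    )

    func injectHostRecord(corefile, hostIP string) string {
        hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.Contains(line, "forward . /etc/resolv.conf") {
                out.WriteString(hosts) // the hosts block must precede forward to win the lookup
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
        fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
    }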
	W1025 09:30:37.802588  182527 node_ready.go:57] node "old-k8s-version-881642" has "Ready":"False" status (will retry)
	W1025 09:30:40.302782  182527 node_ready.go:57] node "old-k8s-version-881642" has "Ready":"False" status (will retry)
	W1025 09:30:42.303131  182527 node_ready.go:57] node "old-k8s-version-881642" has "Ready":"False" status (will retry)
	W1025 09:30:44.802842  182527 node_ready.go:57] node "old-k8s-version-881642" has "Ready":"False" status (will retry)
	W1025 09:30:46.803146  182527 node_ready.go:57] node "old-k8s-version-881642" has "Ready":"False" status (will retry)
	I1025 09:30:48.303143  182527 node_ready.go:49] node "old-k8s-version-881642" is "Ready"
	I1025 09:30:48.303172  182527 node_ready.go:38] duration metric: took 12.504000852s for node "old-k8s-version-881642" to be "Ready" ...
	I1025 09:30:48.303186  182527 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:30:48.303245  182527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:30:48.324622  182527 api_server.go:72] duration metric: took 14.140716887s to wait for apiserver process to appear ...
	I1025 09:30:48.324645  182527 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:30:48.324665  182527 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 09:30:48.338329  182527 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 09:30:48.339994  182527 api_server.go:141] control plane version: v1.28.0
	I1025 09:30:48.340015  182527 api_server.go:131] duration metric: took 15.363187ms to wait for apiserver health ...
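The healthz wait above is a plain HTTPS GET against the apiserver that succeeds once the endpoint returns 200 with body "ok", exactly as logged. A stripped-down probe; InsecureSkipVerify is a shortcut for this sketch, whereas minikube validates against the cluster CA:

    // Probe the apiserver's /healthz endpoint.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch-only shortcut
        }}
        resp, err := client.Get("https://192.168.76.2:8443/healthz")
        if err != nil {
            fmt.Println("not healthy yet:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }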
	I1025 09:30:48.340023  182527 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:30:48.346087  182527 system_pods.go:59] 8 kube-system pods found
	I1025 09:30:48.346125  182527 system_pods.go:61] "coredns-5dd5756b68-jsvbf" [b48d8cb0-1b9b-47b6-9978-d0999af63891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:30:48.346132  182527 system_pods.go:61] "etcd-old-k8s-version-881642" [8c68d6f9-0ea0-41c4-adb5-f6fbd06b4cee] Running
	I1025 09:30:48.346140  182527 system_pods.go:61] "kindnet-nvxh8" [5b2c497b-fba6-4404-ae57-a77af6bb6247] Running
	I1025 09:30:48.346145  182527 system_pods.go:61] "kube-apiserver-old-k8s-version-881642" [cd7ab399-8e19-4499-b362-c52482c21638] Running
	I1025 09:30:48.346150  182527 system_pods.go:61] "kube-controller-manager-old-k8s-version-881642" [96995e7f-0862-4aaa-89e3-9151562d7cd9] Running
	I1025 09:30:48.346154  182527 system_pods.go:61] "kube-proxy-6929r" [d632e328-6851-4ff1-95d9-839150635fbe] Running
	I1025 09:30:48.346159  182527 system_pods.go:61] "kube-scheduler-old-k8s-version-881642" [b7a77731-5903-4965-95fe-1150b120fabb] Running
	I1025 09:30:48.346164  182527 system_pods.go:61] "storage-provisioner" [ad9a123b-2478-4593-a68f-18ba44c87403] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:30:48.346170  182527 system_pods.go:74] duration metric: took 6.141169ms to wait for pod list to return data ...
	I1025 09:30:48.346178  182527 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:30:48.349970  182527 default_sa.go:45] found service account: "default"
	I1025 09:30:48.350010  182527 default_sa.go:55] duration metric: took 3.825935ms for default service account to be created ...
	I1025 09:30:48.350020  182527 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:30:48.360420  182527 system_pods.go:86] 8 kube-system pods found
	I1025 09:30:48.360495  182527 system_pods.go:89] "coredns-5dd5756b68-jsvbf" [b48d8cb0-1b9b-47b6-9978-d0999af63891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:30:48.360505  182527 system_pods.go:89] "etcd-old-k8s-version-881642" [8c68d6f9-0ea0-41c4-adb5-f6fbd06b4cee] Running
	I1025 09:30:48.360512  182527 system_pods.go:89] "kindnet-nvxh8" [5b2c497b-fba6-4404-ae57-a77af6bb6247] Running
	I1025 09:30:48.360517  182527 system_pods.go:89] "kube-apiserver-old-k8s-version-881642" [cd7ab399-8e19-4499-b362-c52482c21638] Running
	I1025 09:30:48.360522  182527 system_pods.go:89] "kube-controller-manager-old-k8s-version-881642" [96995e7f-0862-4aaa-89e3-9151562d7cd9] Running
	I1025 09:30:48.360525  182527 system_pods.go:89] "kube-proxy-6929r" [d632e328-6851-4ff1-95d9-839150635fbe] Running
	I1025 09:30:48.360529  182527 system_pods.go:89] "kube-scheduler-old-k8s-version-881642" [b7a77731-5903-4965-95fe-1150b120fabb] Running
	I1025 09:30:48.360535  182527 system_pods.go:89] "storage-provisioner" [ad9a123b-2478-4593-a68f-18ba44c87403] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:30:48.360585  182527 retry.go:31] will retry after 297.863311ms: missing components: kube-dns
	I1025 09:30:48.681805  182527 system_pods.go:86] 8 kube-system pods found
	I1025 09:30:48.681837  182527 system_pods.go:89] "coredns-5dd5756b68-jsvbf" [b48d8cb0-1b9b-47b6-9978-d0999af63891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:30:48.681853  182527 system_pods.go:89] "etcd-old-k8s-version-881642" [8c68d6f9-0ea0-41c4-adb5-f6fbd06b4cee] Running
	I1025 09:30:48.681863  182527 system_pods.go:89] "kindnet-nvxh8" [5b2c497b-fba6-4404-ae57-a77af6bb6247] Running
	I1025 09:30:48.681868  182527 system_pods.go:89] "kube-apiserver-old-k8s-version-881642" [cd7ab399-8e19-4499-b362-c52482c21638] Running
	I1025 09:30:48.681872  182527 system_pods.go:89] "kube-controller-manager-old-k8s-version-881642" [96995e7f-0862-4aaa-89e3-9151562d7cd9] Running
	I1025 09:30:48.681876  182527 system_pods.go:89] "kube-proxy-6929r" [d632e328-6851-4ff1-95d9-839150635fbe] Running
	I1025 09:30:48.681880  182527 system_pods.go:89] "kube-scheduler-old-k8s-version-881642" [b7a77731-5903-4965-95fe-1150b120fabb] Running
	I1025 09:30:48.681885  182527 system_pods.go:89] "storage-provisioner" [ad9a123b-2478-4593-a68f-18ba44c87403] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:30:48.681899  182527 retry.go:31] will retry after 289.342802ms: missing components: kube-dns
	I1025 09:30:48.977235  182527 system_pods.go:86] 8 kube-system pods found
	I1025 09:30:48.977327  182527 system_pods.go:89] "coredns-5dd5756b68-jsvbf" [b48d8cb0-1b9b-47b6-9978-d0999af63891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:30:48.977349  182527 system_pods.go:89] "etcd-old-k8s-version-881642" [8c68d6f9-0ea0-41c4-adb5-f6fbd06b4cee] Running
	I1025 09:30:48.977395  182527 system_pods.go:89] "kindnet-nvxh8" [5b2c497b-fba6-4404-ae57-a77af6bb6247] Running
	I1025 09:30:48.977421  182527 system_pods.go:89] "kube-apiserver-old-k8s-version-881642" [cd7ab399-8e19-4499-b362-c52482c21638] Running
	I1025 09:30:48.977448  182527 system_pods.go:89] "kube-controller-manager-old-k8s-version-881642" [96995e7f-0862-4aaa-89e3-9151562d7cd9] Running
	I1025 09:30:48.977486  182527 system_pods.go:89] "kube-proxy-6929r" [d632e328-6851-4ff1-95d9-839150635fbe] Running
	I1025 09:30:48.977513  182527 system_pods.go:89] "kube-scheduler-old-k8s-version-881642" [b7a77731-5903-4965-95fe-1150b120fabb] Running
	I1025 09:30:48.977543  182527 system_pods.go:89] "storage-provisioner" [ad9a123b-2478-4593-a68f-18ba44c87403] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:30:48.977587  182527 retry.go:31] will retry after 324.440554ms: missing components: kube-dns
	I1025 09:30:49.307469  182527 system_pods.go:86] 8 kube-system pods found
	I1025 09:30:49.307502  182527 system_pods.go:89] "coredns-5dd5756b68-jsvbf" [b48d8cb0-1b9b-47b6-9978-d0999af63891] Running
	I1025 09:30:49.307510  182527 system_pods.go:89] "etcd-old-k8s-version-881642" [8c68d6f9-0ea0-41c4-adb5-f6fbd06b4cee] Running
	I1025 09:30:49.307514  182527 system_pods.go:89] "kindnet-nvxh8" [5b2c497b-fba6-4404-ae57-a77af6bb6247] Running
	I1025 09:30:49.307519  182527 system_pods.go:89] "kube-apiserver-old-k8s-version-881642" [cd7ab399-8e19-4499-b362-c52482c21638] Running
	I1025 09:30:49.307524  182527 system_pods.go:89] "kube-controller-manager-old-k8s-version-881642" [96995e7f-0862-4aaa-89e3-9151562d7cd9] Running
	I1025 09:30:49.307528  182527 system_pods.go:89] "kube-proxy-6929r" [d632e328-6851-4ff1-95d9-839150635fbe] Running
	I1025 09:30:49.307532  182527 system_pods.go:89] "kube-scheduler-old-k8s-version-881642" [b7a77731-5903-4965-95fe-1150b120fabb] Running
	I1025 09:30:49.307536  182527 system_pods.go:89] "storage-provisioner" [ad9a123b-2478-4593-a68f-18ba44c87403] Running
	I1025 09:30:49.307544  182527 system_pods.go:126] duration metric: took 957.519038ms to wait for k8s-apps to be running ...
	I1025 09:30:49.307557  182527 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:30:49.307616  182527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:30:49.320902  182527 system_svc.go:56] duration metric: took 13.336022ms WaitForService to wait for kubelet
	I1025 09:30:49.320933  182527 kubeadm.go:586] duration metric: took 15.137030503s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:30:49.320955  182527 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:30:49.323983  182527 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:30:49.324018  182527 node_conditions.go:123] node cpu capacity is 2
	I1025 09:30:49.324032  182527 node_conditions.go:105] duration metric: took 3.071626ms to run NodePressure ...
	I1025 09:30:49.324044  182527 start.go:241] waiting for startup goroutines ...
	I1025 09:30:49.324052  182527 start.go:246] waiting for cluster config update ...
	I1025 09:30:49.324064  182527 start.go:255] writing updated cluster config ...
	I1025 09:30:49.324365  182527 ssh_runner.go:195] Run: rm -f paused
	I1025 09:30:49.328067  182527 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:30:49.332579  182527 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-jsvbf" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:30:49.338176  182527 pod_ready.go:94] pod "coredns-5dd5756b68-jsvbf" is "Ready"
	I1025 09:30:49.338250  182527 pod_ready.go:86] duration metric: took 5.643209ms for pod "coredns-5dd5756b68-jsvbf" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:30:49.341738  182527 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-881642" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:30:49.347656  182527 pod_ready.go:94] pod "etcd-old-k8s-version-881642" is "Ready"
	I1025 09:30:49.347682  182527 pod_ready.go:86] duration metric: took 5.913613ms for pod "etcd-old-k8s-version-881642" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:30:49.350969  182527 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-881642" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:30:49.355745  182527 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-881642" is "Ready"
	I1025 09:30:49.355772  182527 pod_ready.go:86] duration metric: took 4.77817ms for pod "kube-apiserver-old-k8s-version-881642" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:30:49.358825  182527 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-881642" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:30:49.732391  182527 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-881642" is "Ready"
	I1025 09:30:49.732422  182527 pod_ready.go:86] duration metric: took 373.569343ms for pod "kube-controller-manager-old-k8s-version-881642" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:30:49.933087  182527 pod_ready.go:83] waiting for pod "kube-proxy-6929r" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:30:50.332713  182527 pod_ready.go:94] pod "kube-proxy-6929r" is "Ready"
	I1025 09:30:50.332737  182527 pod_ready.go:86] duration metric: took 399.624136ms for pod "kube-proxy-6929r" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:30:50.532599  182527 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-881642" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:30:50.931948  182527 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-881642" is "Ready"
	I1025 09:30:50.931976  182527 pod_ready.go:86] duration metric: took 399.349243ms for pod "kube-scheduler-old-k8s-version-881642" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:30:50.931989  182527 pod_ready.go:40] duration metric: took 1.603889163s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
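Each pod_ready check above tests the PodReady condition on pods carrying one of the listed control-plane labels. Roughly, with client-go (the kubeconfig path from the logs is assumed reachable; an illustration, not minikube's implementation):

    // List kube-system pods and report their PodReady condition.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for i := range pods.Items {
            p := &pods.Items[i]
            fmt.Printf("%s ready=%v\n", p.Name, podReady(p))
        }
    }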
	I1025 09:30:50.988921  182527 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1025 09:30:50.992355  182527 out.go:203] 
	W1025 09:30:50.995295  182527 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 09:30:50.998263  182527 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 09:30:51.005077  182527 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-881642" cluster and "default" namespace by default
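The "minor skew: 5" figure above is simple arithmetic on the two minor versions (kubectl 1.33 vs. cluster 1.28); Kubernetes only guarantees kubectl compatibility within one minor version of the server, hence the warning. The computation, sketched:

    // Compute the kubectl/cluster minor-version skew reported in the log.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func minor(v string) int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        m, _ := strconv.Atoi(parts[1])
        return m
    }

    func main() {
        client, server := "1.33.2", "1.28.0"
        skew := minor(client) - minor(server)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("minor skew: %d\n", skew) // prints 5, matching the log
    }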
	
	
	==> CRI-O <==
	Oct 25 09:30:48 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:48.668970913Z" level=info msg="Created container b71c1fa1c44c28fd140b67c656b0e3bc4c92f550b370a1dfc85d94af78ee9d7b: kube-system/coredns-5dd5756b68-jsvbf/coredns" id=d0a7052e-9d8c-47aa-b39e-11bb5e836435 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:30:48 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:48.669692352Z" level=info msg="Starting container: b71c1fa1c44c28fd140b67c656b0e3bc4c92f550b370a1dfc85d94af78ee9d7b" id=01cadfe2-5ff5-4dae-b3f9-c34d34a93614 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:30:48 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:48.674643013Z" level=info msg="Started container" PID=1937 containerID=b71c1fa1c44c28fd140b67c656b0e3bc4c92f550b370a1dfc85d94af78ee9d7b description=kube-system/coredns-5dd5756b68-jsvbf/coredns id=01cadfe2-5ff5-4dae-b3f9-c34d34a93614 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9c9187f0d27adc1d8ccc0d093542d144497888786a50eb2fd96fdedcb771df31
	Oct 25 09:30:51 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:51.568862905Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c70f280e-c977-49b0-b3c8-f099f13528eb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:30:51 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:51.568936088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:30:51 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:51.574309467Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:49f69cc76f17215ac7e84e751b458f574a3eada944fc327bdda31080d55c90bb UID:e0d22b63-119d-4a6a-aa7a-2f343c65f609 NetNS:/var/run/netns/6d9f3f7b-770e-4344-9d87-f46ed3099a1f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000497980}] Aliases:map[]}"
	Oct 25 09:30:51 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:51.574472661Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:30:51 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:51.585638666Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:49f69cc76f17215ac7e84e751b458f574a3eada944fc327bdda31080d55c90bb UID:e0d22b63-119d-4a6a-aa7a-2f343c65f609 NetNS:/var/run/netns/6d9f3f7b-770e-4344-9d87-f46ed3099a1f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000497980}] Aliases:map[]}"
	Oct 25 09:30:51 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:51.586236197Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 09:30:51 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:51.590441197Z" level=info msg="Ran pod sandbox 49f69cc76f17215ac7e84e751b458f574a3eada944fc327bdda31080d55c90bb with infra container: default/busybox/POD" id=c70f280e-c977-49b0-b3c8-f099f13528eb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:30:51 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:51.592673927Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=74a7e9e8-1636-46c8-99a6-8bd67831eb91 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:30:51 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:51.592789991Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=74a7e9e8-1636-46c8-99a6-8bd67831eb91 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:30:51 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:51.592837458Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=74a7e9e8-1636-46c8-99a6-8bd67831eb91 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:30:51 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:51.5936932Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b6e579d4-411e-4654-99e8-b38ce89b8592 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:30:51 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:51.59650993Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 09:30:53 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:53.614823499Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b6e579d4-411e-4654-99e8-b38ce89b8592 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:30:53 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:53.618092716Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=229c7331-cdb9-461f-8f95-5d12ce5addc2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:30:53 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:53.61997783Z" level=info msg="Creating container: default/busybox/busybox" id=a1a052a4-a7d8-44cc-9ddd-74c3f231b1b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:30:53 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:53.62009597Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:30:53 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:53.627840033Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:30:53 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:53.628335827Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:30:53 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:53.64353435Z" level=info msg="Created container 3927f254aa54f6575c850651b902a8c9a9f0637f11b02f76c582dcd3338a9137: default/busybox/busybox" id=a1a052a4-a7d8-44cc-9ddd-74c3f231b1b8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:30:53 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:53.644259374Z" level=info msg="Starting container: 3927f254aa54f6575c850651b902a8c9a9f0637f11b02f76c582dcd3338a9137" id=645ea419-154c-4778-b36f-71c37a9f498e name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:30:53 old-k8s-version-881642 crio[841]: time="2025-10-25T09:30:53.645884497Z" level=info msg="Started container" PID=1995 containerID=3927f254aa54f6575c850651b902a8c9a9f0637f11b02f76c582dcd3338a9137 description=default/busybox/busybox id=645ea419-154c-4778-b36f-71c37a9f498e name=/runtime.v1.RuntimeService/StartContainer sandboxID=49f69cc76f17215ac7e84e751b458f574a3eada944fc327bdda31080d55c90bb
	Oct 25 09:31:00 old-k8s-version-881642 crio[841]: time="2025-10-25T09:31:00.480841882Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	3927f254aa54f       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   49f69cc76f172       busybox                                          default
	b71c1fa1c44c2       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   9c9187f0d27ad       coredns-5dd5756b68-jsvbf                         kube-system
	ebc0721a8869e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   3ed3d65560b23       storage-provisioner                              kube-system
	5cba2c7e85b35       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   b1a67607debe6       kindnet-nvxh8                                    kube-system
	a325d7f3ab251       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   724aac56f8b81       kube-proxy-6929r                                 kube-system
	0d4bf523891d7       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      47 seconds ago      Running             kube-controller-manager   0                   baa9eb61522fe       kube-controller-manager-old-k8s-version-881642   kube-system
	bcf4d71e6b1de       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      47 seconds ago      Running             etcd                      0                   f9ec58cb43cd5       etcd-old-k8s-version-881642                      kube-system
	f6de032446a37       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      47 seconds ago      Running             kube-apiserver            0                   53d25ac784463       kube-apiserver-old-k8s-version-881642            kube-system
	1df38cff52812       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      47 seconds ago      Running             kube-scheduler            0                   efa0b3a1ef4e4       kube-scheduler-old-k8s-version-881642            kube-system
	
	
	==> coredns [b71c1fa1c44c28fd140b67c656b0e3bc4c92f550b370a1dfc85d94af78ee9d7b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57369 - 6500 "HINFO IN 4115913819535088235.6939632173696414492. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005234095s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-881642
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-881642
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=old-k8s-version-881642
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_30_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:30:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-881642
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:30:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:30:52 +0000   Sat, 25 Oct 2025 09:30:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:30:52 +0000   Sat, 25 Oct 2025 09:30:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:30:52 +0000   Sat, 25 Oct 2025 09:30:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:30:52 +0000   Sat, 25 Oct 2025 09:30:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-881642
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                7da6e2e0-c6f7-4303-a7ca-65b12f9698fc
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-jsvbf                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-881642                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-nvxh8                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-881642             250m (12%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-881642    200m (10%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-6929r                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-881642             100m (5%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 49s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node old-k8s-version-881642 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-881642 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node old-k8s-version-881642 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-881642 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-881642 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-881642 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-881642 event: Registered Node old-k8s-version-881642 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-881642 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct25 09:02] overlayfs: idmapped layers are currently not supported
	[Oct25 09:07] overlayfs: idmapped layers are currently not supported
	[Oct25 09:08] overlayfs: idmapped layers are currently not supported
	[Oct25 09:09] overlayfs: idmapped layers are currently not supported
	[Oct25 09:10] overlayfs: idmapped layers are currently not supported
	[Oct25 09:11] overlayfs: idmapped layers are currently not supported
	[Oct25 09:13] overlayfs: idmapped layers are currently not supported
	[ +18.632418] overlayfs: idmapped layers are currently not supported
	[ +13.271347] overlayfs: idmapped layers are currently not supported
	[Oct25 09:14] overlayfs: idmapped layers are currently not supported
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [bcf4d71e6b1de4fe6b57ea36d1a67e4ff6c9fbb13e3f617d8d29b4c99ca180d6] <==
	{"level":"info","ts":"2025-10-25T09:30:14.422767Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-25T09:30:14.421332Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-25T09:30:14.422951Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-25T09:30:14.421644Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ea7e25599daad906","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-10-25T09:30:14.421777Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:30:14.424292Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:30:14.4244Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:30:14.855438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-25T09:30:14.855488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-25T09:30:14.855515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-25T09:30:14.855534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-25T09:30:14.855541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-25T09:30:14.855565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-25T09:30:14.855582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-25T09:30:14.856621Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:30:14.860826Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-881642 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T09:30:14.861725Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:30:14.861847Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:30:14.861901Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:30:14.86194Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:30:14.865177Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T09:30:14.865855Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:30:14.876014Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-25T09:30:14.899296Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T09:30:14.899358Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:31:02 up  1:13,  0 user,  load average: 1.96, 2.82, 2.51
	Linux old-k8s-version-881642 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5cba2c7e85b3544cb89d9d159293ac8a0cb13c0e9cbd943ca940bf8edd8dfadc] <==
	I1025 09:30:37.621859       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:30:37.622289       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 09:30:37.622444       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:30:37.622483       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:30:37.622521       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:30:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:30:37.824151       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:30:37.824178       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:30:37.824187       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:30:37.824465       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:30:38.124330       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:30:38.124359       1 metrics.go:72] Registering metrics
	I1025 09:30:38.124436       1 controller.go:711] "Syncing nftables rules"
	I1025 09:30:47.831120       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:30:47.831162       1 main.go:301] handling current node
	I1025 09:30:57.826148       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:30:57.826184       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f6de032446a37cec2f04ef6c659dec92a83eaa966631e49954aa798169248056] <==
	I1025 09:30:18.352490       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 09:30:18.366815       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:30:18.375240       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 09:30:18.375476       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:30:18.385968       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1025 09:30:18.386869       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1025 09:30:18.398115       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 09:30:18.408064       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:30:19.181927       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:30:19.187586       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:30:19.187610       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:30:19.952707       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:30:20.026142       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:30:20.103844       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:30:20.114211       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1025 09:30:20.115356       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 09:30:20.121418       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:30:20.351603       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 09:30:21.674711       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 09:30:21.691350       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:30:21.708686       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1025 09:30:33.862880       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1025 09:30:34.016635       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E1025 09:31:00.545151       1 upgradeaware.go:439] Error proxying data from backend to client: write tcp 192.168.76.2:8443->192.168.76.1:56966: write: broken pipe
	E1025 09:31:00.545488       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.76.2:45456->192.168.76.2:10250: write: broken pipe
	
	
	==> kube-controller-manager [0d4bf523891d7317b62e16f1bdaa99435100a8f478d7ebc1a52d351738592bff] <==
	I1025 09:30:33.390264       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 09:30:33.395537       1 shared_informer.go:318] Caches are synced for crt configmap
	I1025 09:30:33.399933       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1025 09:30:33.749959       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:30:33.750007       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 09:30:33.779196       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:30:33.868419       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1025 09:30:34.030106       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6929r"
	I1025 09:30:34.035902       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-nvxh8"
	I1025 09:30:34.246389       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-wfllg"
	I1025 09:30:34.291856       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-jsvbf"
	I1025 09:30:34.350308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="481.574852ms"
	I1025 09:30:34.421720       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.360891ms"
	I1025 09:30:34.457302       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="35.528599ms"
	I1025 09:30:34.457407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.586µs"
	I1025 09:30:35.874531       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1025 09:30:35.895477       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-wfllg"
	I1025 09:30:35.919578       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.04219ms"
	I1025 09:30:35.928887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.265926ms"
	I1025 09:30:35.929846       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.108µs"
	I1025 09:30:48.274509       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.794µs"
	I1025 09:30:48.296949       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.154µs"
	I1025 09:30:48.350424       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1025 09:30:49.150282       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.239585ms"
	I1025 09:30:49.150364       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.034µs"
	
	
	==> kube-proxy [a325d7f3ab251102e07f8ccac32c7e0eb3a1ddbe590b4ece1f1c7bdd0c95b85c] <==
	I1025 09:30:34.945500       1 server_others.go:69] "Using iptables proxy"
	I1025 09:30:34.977628       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1025 09:30:35.034410       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:30:35.039323       1 server_others.go:152] "Using iptables Proxier"
	I1025 09:30:35.039361       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 09:30:35.039368       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 09:30:35.039391       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 09:30:35.039608       1 server.go:846] "Version info" version="v1.28.0"
	I1025 09:30:35.039619       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:30:35.040918       1 config.go:188] "Starting service config controller"
	I1025 09:30:35.040931       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 09:30:35.040947       1 config.go:97] "Starting endpoint slice config controller"
	I1025 09:30:35.040950       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 09:30:35.053467       1 config.go:315] "Starting node config controller"
	I1025 09:30:35.053492       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 09:30:35.141495       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 09:30:35.141571       1 shared_informer.go:318] Caches are synced for service config
	I1025 09:30:35.154017       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [1df38cff5281215cef583811b7c38c38eaa5e6d7778eaf2a531d4f9f2abbff7a] <==
	W1025 09:30:18.343014       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 09:30:18.343417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1025 09:30:18.343208       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 09:30:18.343508       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1025 09:30:19.260322       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 09:30:19.260358       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1025 09:30:19.267932       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 09:30:19.268033       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 09:30:19.406365       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 09:30:19.406400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1025 09:30:19.436007       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 09:30:19.436044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1025 09:30:19.438748       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1025 09:30:19.438894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1025 09:30:19.492758       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 09:30:19.492862       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 09:30:19.514956       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 09:30:19.515071       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1025 09:30:19.527891       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 09:30:19.527932       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1025 09:30:19.612132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 09:30:19.612326       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 09:30:19.623748       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 09:30:19.623789       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1025 09:30:21.705696       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 09:30:34 old-k8s-version-881642 kubelet[1367]: I1025 09:30:34.044317    1367 topology_manager.go:215] "Topology Admit Handler" podUID="d632e328-6851-4ff1-95d9-839150635fbe" podNamespace="kube-system" podName="kube-proxy-6929r"
	Oct 25 09:30:34 old-k8s-version-881642 kubelet[1367]: I1025 09:30:34.069635    1367 topology_manager.go:215] "Topology Admit Handler" podUID="5b2c497b-fba6-4404-ae57-a77af6bb6247" podNamespace="kube-system" podName="kindnet-nvxh8"
	Oct 25 09:30:34 old-k8s-version-881642 kubelet[1367]: I1025 09:30:34.143732    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5b2c497b-fba6-4404-ae57-a77af6bb6247-cni-cfg\") pod \"kindnet-nvxh8\" (UID: \"5b2c497b-fba6-4404-ae57-a77af6bb6247\") " pod="kube-system/kindnet-nvxh8"
	Oct 25 09:30:34 old-k8s-version-881642 kubelet[1367]: I1025 09:30:34.143793    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpgjs\" (UniqueName: \"kubernetes.io/projected/d632e328-6851-4ff1-95d9-839150635fbe-kube-api-access-jpgjs\") pod \"kube-proxy-6929r\" (UID: \"d632e328-6851-4ff1-95d9-839150635fbe\") " pod="kube-system/kube-proxy-6929r"
	Oct 25 09:30:34 old-k8s-version-881642 kubelet[1367]: I1025 09:30:34.143821    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b2c497b-fba6-4404-ae57-a77af6bb6247-xtables-lock\") pod \"kindnet-nvxh8\" (UID: \"5b2c497b-fba6-4404-ae57-a77af6bb6247\") " pod="kube-system/kindnet-nvxh8"
	Oct 25 09:30:34 old-k8s-version-881642 kubelet[1367]: I1025 09:30:34.143876    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d632e328-6851-4ff1-95d9-839150635fbe-kube-proxy\") pod \"kube-proxy-6929r\" (UID: \"d632e328-6851-4ff1-95d9-839150635fbe\") " pod="kube-system/kube-proxy-6929r"
	Oct 25 09:30:34 old-k8s-version-881642 kubelet[1367]: I1025 09:30:34.143903    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d632e328-6851-4ff1-95d9-839150635fbe-lib-modules\") pod \"kube-proxy-6929r\" (UID: \"d632e328-6851-4ff1-95d9-839150635fbe\") " pod="kube-system/kube-proxy-6929r"
	Oct 25 09:30:34 old-k8s-version-881642 kubelet[1367]: I1025 09:30:34.143959    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b2c497b-fba6-4404-ae57-a77af6bb6247-lib-modules\") pod \"kindnet-nvxh8\" (UID: \"5b2c497b-fba6-4404-ae57-a77af6bb6247\") " pod="kube-system/kindnet-nvxh8"
	Oct 25 09:30:34 old-k8s-version-881642 kubelet[1367]: I1025 09:30:34.143991    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d632e328-6851-4ff1-95d9-839150635fbe-xtables-lock\") pod \"kube-proxy-6929r\" (UID: \"d632e328-6851-4ff1-95d9-839150635fbe\") " pod="kube-system/kube-proxy-6929r"
	Oct 25 09:30:34 old-k8s-version-881642 kubelet[1367]: I1025 09:30:34.144026    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjqfv\" (UniqueName: \"kubernetes.io/projected/5b2c497b-fba6-4404-ae57-a77af6bb6247-kube-api-access-mjqfv\") pod \"kindnet-nvxh8\" (UID: \"5b2c497b-fba6-4404-ae57-a77af6bb6247\") " pod="kube-system/kindnet-nvxh8"
	Oct 25 09:30:34 old-k8s-version-881642 kubelet[1367]: W1025 09:30:34.655557    1367 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/crio-724aac56f8b819213305268f2086824205d254c1e0c3355cd016184b720a5b99 WatchSource:0}: Error finding container 724aac56f8b819213305268f2086824205d254c1e0c3355cd016184b720a5b99: Status 404 returned error can't find the container with id 724aac56f8b819213305268f2086824205d254c1e0c3355cd016184b720a5b99
	Oct 25 09:30:35 old-k8s-version-881642 kubelet[1367]: I1025 09:30:35.084714    1367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6929r" podStartSLOduration=1.084404946 podCreationTimestamp="2025-10-25 09:30:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:30:35.083230734 +0000 UTC m=+13.453652178" watchObservedRunningTime="2025-10-25 09:30:35.084404946 +0000 UTC m=+13.454826390"
	Oct 25 09:30:38 old-k8s-version-881642 kubelet[1367]: I1025 09:30:38.067916    1367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-nvxh8" podStartSLOduration=1.2579676229999999 podCreationTimestamp="2025-10-25 09:30:34 +0000 UTC" firstStartedPulling="2025-10-25 09:30:34.696636867 +0000 UTC m=+13.067058311" lastFinishedPulling="2025-10-25 09:30:37.506538551 +0000 UTC m=+15.876960003" observedRunningTime="2025-10-25 09:30:38.066804733 +0000 UTC m=+16.437226357" watchObservedRunningTime="2025-10-25 09:30:38.067869315 +0000 UTC m=+16.438290759"
	Oct 25 09:30:48 old-k8s-version-881642 kubelet[1367]: I1025 09:30:48.238623    1367 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 25 09:30:48 old-k8s-version-881642 kubelet[1367]: I1025 09:30:48.273664    1367 topology_manager.go:215] "Topology Admit Handler" podUID="b48d8cb0-1b9b-47b6-9978-d0999af63891" podNamespace="kube-system" podName="coredns-5dd5756b68-jsvbf"
	Oct 25 09:30:48 old-k8s-version-881642 kubelet[1367]: I1025 09:30:48.281692    1367 topology_manager.go:215] "Topology Admit Handler" podUID="ad9a123b-2478-4593-a68f-18ba44c87403" podNamespace="kube-system" podName="storage-provisioner"
	Oct 25 09:30:48 old-k8s-version-881642 kubelet[1367]: I1025 09:30:48.368153    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ad9a123b-2478-4593-a68f-18ba44c87403-tmp\") pod \"storage-provisioner\" (UID: \"ad9a123b-2478-4593-a68f-18ba44c87403\") " pod="kube-system/storage-provisioner"
	Oct 25 09:30:48 old-k8s-version-881642 kubelet[1367]: I1025 09:30:48.368364    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s226\" (UniqueName: \"kubernetes.io/projected/ad9a123b-2478-4593-a68f-18ba44c87403-kube-api-access-6s226\") pod \"storage-provisioner\" (UID: \"ad9a123b-2478-4593-a68f-18ba44c87403\") " pod="kube-system/storage-provisioner"
	Oct 25 09:30:48 old-k8s-version-881642 kubelet[1367]: I1025 09:30:48.368475    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b48d8cb0-1b9b-47b6-9978-d0999af63891-config-volume\") pod \"coredns-5dd5756b68-jsvbf\" (UID: \"b48d8cb0-1b9b-47b6-9978-d0999af63891\") " pod="kube-system/coredns-5dd5756b68-jsvbf"
	Oct 25 09:30:48 old-k8s-version-881642 kubelet[1367]: I1025 09:30:48.368570    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7vh7\" (UniqueName: \"kubernetes.io/projected/b48d8cb0-1b9b-47b6-9978-d0999af63891-kube-api-access-s7vh7\") pod \"coredns-5dd5756b68-jsvbf\" (UID: \"b48d8cb0-1b9b-47b6-9978-d0999af63891\") " pod="kube-system/coredns-5dd5756b68-jsvbf"
	Oct 25 09:30:48 old-k8s-version-881642 kubelet[1367]: W1025 09:30:48.610726    1367 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/crio-9c9187f0d27adc1d8ccc0d093542d144497888786a50eb2fd96fdedcb771df31 WatchSource:0}: Error finding container 9c9187f0d27adc1d8ccc0d093542d144497888786a50eb2fd96fdedcb771df31: Status 404 returned error can't find the container with id 9c9187f0d27adc1d8ccc0d093542d144497888786a50eb2fd96fdedcb771df31
	Oct 25 09:30:49 old-k8s-version-881642 kubelet[1367]: I1025 09:30:49.134339    1367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-jsvbf" podStartSLOduration=15.134295816 podCreationTimestamp="2025-10-25 09:30:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:30:49.13136573 +0000 UTC m=+27.501787182" watchObservedRunningTime="2025-10-25 09:30:49.134295816 +0000 UTC m=+27.504717268"
	Oct 25 09:30:49 old-k8s-version-881642 kubelet[1367]: I1025 09:30:49.134601    1367 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.134576879 podCreationTimestamp="2025-10-25 09:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:30:49.116783798 +0000 UTC m=+27.487205259" watchObservedRunningTime="2025-10-25 09:30:49.134576879 +0000 UTC m=+27.504998347"
	Oct 25 09:30:51 old-k8s-version-881642 kubelet[1367]: I1025 09:30:51.266381    1367 topology_manager.go:215] "Topology Admit Handler" podUID="e0d22b63-119d-4a6a-aa7a-2f343c65f609" podNamespace="default" podName="busybox"
	Oct 25 09:30:51 old-k8s-version-881642 kubelet[1367]: I1025 09:30:51.390165    1367 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csw57\" (UniqueName: \"kubernetes.io/projected/e0d22b63-119d-4a6a-aa7a-2f343c65f609-kube-api-access-csw57\") pod \"busybox\" (UID: \"e0d22b63-119d-4a6a-aa7a-2f343c65f609\") " pod="default/busybox"
	
	
	==> storage-provisioner [ebc0721a8869e4dc2dcce878d2dc748af9ee83c39104f7f60e175eea3a9b9fbc] <==
	I1025 09:30:48.668594       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:30:48.705204       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:30:48.705258       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 09:30:48.717940       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:30:48.719102       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c5202046-721c-4659-94ff-20871396397e", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-881642_84df09fa-ce69-4a3b-b132-a80de4d4b1e2 became leader
	I1025 09:30:48.721510       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-881642_84df09fa-ce69-4a3b-b132-a80de4d4b1e2!
	I1025 09:30:48.822662       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-881642_84df09fa-ce69-4a3b-b132-a80de4d4b1e2!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-881642 -n old-k8s-version-881642
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-881642 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.54s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-881642 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-881642 --alsologtostderr -v=1: exit status 80 (2.040371632s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-881642 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:32:13.420169  188326 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:32:13.420343  188326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:13.420356  188326 out.go:374] Setting ErrFile to fd 2...
	I1025 09:32:13.420362  188326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:13.420661  188326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:32:13.420950  188326 out.go:368] Setting JSON to false
	I1025 09:32:13.420989  188326 mustload.go:65] Loading cluster: old-k8s-version-881642
	I1025 09:32:13.421401  188326 config.go:182] Loaded profile config "old-k8s-version-881642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 09:32:13.421945  188326 cli_runner.go:164] Run: docker container inspect old-k8s-version-881642 --format={{.State.Status}}
	I1025 09:32:13.440851  188326 host.go:66] Checking if "old-k8s-version-881642" exists ...
	I1025 09:32:13.441240  188326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:13.502680  188326 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 09:32:13.492576618 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:32:13.503359  188326 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-881642 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:32:13.506690  188326 out.go:179] * Pausing node old-k8s-version-881642 ... 
	I1025 09:32:13.509528  188326 host.go:66] Checking if "old-k8s-version-881642" exists ...
	I1025 09:32:13.509883  188326 ssh_runner.go:195] Run: systemctl --version
	I1025 09:32:13.509931  188326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-881642
	I1025 09:32:13.529463  188326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/old-k8s-version-881642/id_rsa Username:docker}
	I1025 09:32:13.632779  188326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:32:13.646521  188326 pause.go:52] kubelet running: true
	I1025 09:32:13.646590  188326 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:32:13.881750  188326 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:32:13.881829  188326 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:32:13.953530  188326 cri.go:89] found id: "50da267439922b6c988559a9dca21888ddb2a1baa51d6996e715b7ac71d5086b"
	I1025 09:32:13.953551  188326 cri.go:89] found id: "a3ae672359d1b3d92f50d907651b263b2eb1d131e95244ceb88afdbc80c44259"
	I1025 09:32:13.953556  188326 cri.go:89] found id: "623cbdf8fdad5fc331b92d416fd5175dab8d10e94570d3ee7297c237142d1782"
	I1025 09:32:13.953560  188326 cri.go:89] found id: "30c881c61d527a0af79c93cf6bee55ac98ef7e8770d668d6b4d4c88d7bd21c98"
	I1025 09:32:13.953567  188326 cri.go:89] found id: "1284d0247dd7ad941ef70c0ac331a1f4338cfb58821a9120b1f3277898bb019b"
	I1025 09:32:13.953571  188326 cri.go:89] found id: "a3e33b746dfb8a954389ce5444704f1ea524fab04b19741824ed405f60130162"
	I1025 09:32:13.953578  188326 cri.go:89] found id: "0833e8f6388be20dd4e350ab56c1e849bebd8eeaf139810589e6f67d6a733ec2"
	I1025 09:32:13.953581  188326 cri.go:89] found id: "34dcf52af6e206b8192338245011e7a8be2e48a9c59c3fdf1ca2a7d5abd47011"
	I1025 09:32:13.953585  188326 cri.go:89] found id: "b09862ddb71a1aa66ca53228ebbdef5cc24a02c13d6704bac8682ab09a4c81b9"
	I1025 09:32:13.953591  188326 cri.go:89] found id: "46f9eaf860dd1682be7c874c140c123c6acd0b09817f0b777efc9b75e40fd409"
	I1025 09:32:13.953595  188326 cri.go:89] found id: "a5d14e1a4096f529878bd9efaddeb716003abd783455c1252932adfdb39b3bd1"
	I1025 09:32:13.953597  188326 cri.go:89] found id: ""
	I1025 09:32:13.953644  188326 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:32:13.973588  188326 retry.go:31] will retry after 339.81014ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:32:13Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:32:14.313933  188326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:32:14.327069  188326 pause.go:52] kubelet running: false
	I1025 09:32:14.327154  188326 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:32:14.493939  188326 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:32:14.494076  188326 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:32:14.562209  188326 cri.go:89] found id: "50da267439922b6c988559a9dca21888ddb2a1baa51d6996e715b7ac71d5086b"
	I1025 09:32:14.562235  188326 cri.go:89] found id: "a3ae672359d1b3d92f50d907651b263b2eb1d131e95244ceb88afdbc80c44259"
	I1025 09:32:14.562240  188326 cri.go:89] found id: "623cbdf8fdad5fc331b92d416fd5175dab8d10e94570d3ee7297c237142d1782"
	I1025 09:32:14.562244  188326 cri.go:89] found id: "30c881c61d527a0af79c93cf6bee55ac98ef7e8770d668d6b4d4c88d7bd21c98"
	I1025 09:32:14.562257  188326 cri.go:89] found id: "1284d0247dd7ad941ef70c0ac331a1f4338cfb58821a9120b1f3277898bb019b"
	I1025 09:32:14.562261  188326 cri.go:89] found id: "a3e33b746dfb8a954389ce5444704f1ea524fab04b19741824ed405f60130162"
	I1025 09:32:14.562301  188326 cri.go:89] found id: "0833e8f6388be20dd4e350ab56c1e849bebd8eeaf139810589e6f67d6a733ec2"
	I1025 09:32:14.562306  188326 cri.go:89] found id: "34dcf52af6e206b8192338245011e7a8be2e48a9c59c3fdf1ca2a7d5abd47011"
	I1025 09:32:14.562309  188326 cri.go:89] found id: "b09862ddb71a1aa66ca53228ebbdef5cc24a02c13d6704bac8682ab09a4c81b9"
	I1025 09:32:14.562316  188326 cri.go:89] found id: "46f9eaf860dd1682be7c874c140c123c6acd0b09817f0b777efc9b75e40fd409"
	I1025 09:32:14.562322  188326 cri.go:89] found id: "a5d14e1a4096f529878bd9efaddeb716003abd783455c1252932adfdb39b3bd1"
	I1025 09:32:14.562326  188326 cri.go:89] found id: ""
	I1025 09:32:14.562398  188326 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:32:14.575596  188326 retry.go:31] will retry after 406.259872ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:32:14Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:32:14.982314  188326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:32:15.004426  188326 pause.go:52] kubelet running: false
	I1025 09:32:15.004528  188326 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:32:15.270571  188326 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:32:15.270657  188326 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:32:15.360965  188326 cri.go:89] found id: "50da267439922b6c988559a9dca21888ddb2a1baa51d6996e715b7ac71d5086b"
	I1025 09:32:15.360989  188326 cri.go:89] found id: "a3ae672359d1b3d92f50d907651b263b2eb1d131e95244ceb88afdbc80c44259"
	I1025 09:32:15.360994  188326 cri.go:89] found id: "623cbdf8fdad5fc331b92d416fd5175dab8d10e94570d3ee7297c237142d1782"
	I1025 09:32:15.360998  188326 cri.go:89] found id: "30c881c61d527a0af79c93cf6bee55ac98ef7e8770d668d6b4d4c88d7bd21c98"
	I1025 09:32:15.361001  188326 cri.go:89] found id: "1284d0247dd7ad941ef70c0ac331a1f4338cfb58821a9120b1f3277898bb019b"
	I1025 09:32:15.361021  188326 cri.go:89] found id: "a3e33b746dfb8a954389ce5444704f1ea524fab04b19741824ed405f60130162"
	I1025 09:32:15.361025  188326 cri.go:89] found id: "0833e8f6388be20dd4e350ab56c1e849bebd8eeaf139810589e6f67d6a733ec2"
	I1025 09:32:15.361028  188326 cri.go:89] found id: "34dcf52af6e206b8192338245011e7a8be2e48a9c59c3fdf1ca2a7d5abd47011"
	I1025 09:32:15.361039  188326 cri.go:89] found id: "b09862ddb71a1aa66ca53228ebbdef5cc24a02c13d6704bac8682ab09a4c81b9"
	I1025 09:32:15.361047  188326 cri.go:89] found id: "46f9eaf860dd1682be7c874c140c123c6acd0b09817f0b777efc9b75e40fd409"
	I1025 09:32:15.361054  188326 cri.go:89] found id: "a5d14e1a4096f529878bd9efaddeb716003abd783455c1252932adfdb39b3bd1"
	I1025 09:32:15.361057  188326 cri.go:89] found id: ""
	I1025 09:32:15.361117  188326 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:32:15.384333  188326 out.go:203] 
	W1025 09:32:15.387193  188326 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:32:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:32:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:32:15.387211  188326 out.go:285] * 
	* 
	W1025 09:32:15.392035  188326 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:32:15.395379  188326 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-881642 --alsologtostderr -v=1 failed: exit status 80
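The failing pause sequence can be replayed by hand for local triage; a minimal sketch, assuming the old-k8s-version-881642 profile from this run is still up (each command mirrors an ssh_runner call visible in the stderr log above):

	# mirrors pause.go:52: check whether kubelet is still running on the node
	minikube -p old-k8s-version-881642 ssh -- sudo systemctl is-active kubelet
	# the call that fails in this run: runc cannot open its default state root /run/runc
	minikube -p old-k8s-version-881642 ssh -- sudo runc list -f json
	# the crictl listing that succeeds above, for comparison
	minikube -p old-k8s-version-881642 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

That crictl can enumerate the kube-system containers while `runc list` reports "open /run/runc: no such file or directory" is consistent with the GUEST_PAUSE error in the stderr block.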
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-881642
helpers_test.go:243: (dbg) docker inspect old-k8s-version-881642:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306",
	        "Created": "2025-10-25T09:29:58.440367349Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 186247,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:31:15.675289468Z",
	            "FinishedAt": "2025-10-25T09:31:14.774807582Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/hostname",
	        "HostsPath": "/var/lib/docker/containers/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/hosts",
	        "LogPath": "/var/lib/docker/containers/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306-json.log",
	        "Name": "/old-k8s-version-881642",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-881642:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-881642",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306",
	                "LowerDir": "/var/lib/docker/overlay2/6df03c5857d4595a180f3a88a7c703bebe35718ada3a63bcd7f20b5908953f91-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6df03c5857d4595a180f3a88a7c703bebe35718ada3a63bcd7f20b5908953f91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6df03c5857d4595a180f3a88a7c703bebe35718ada3a63bcd7f20b5908953f91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6df03c5857d4595a180f3a88a7c703bebe35718ada3a63bcd7f20b5908953f91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-881642",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-881642/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-881642",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-881642",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-881642",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f1c6d478cdefda7536d1e6673bd6842a51450e921bb065a381a45dd68cbb080",
	            "SandboxKey": "/var/run/docker/netns/6f1c6d478cde",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-881642": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:08:12:4a:f3:4b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "917e141362c271bc727ce35937091cd630c8eec9e1077a440c52d3089c688f49",
	                    "EndpointID": "55aa1d841f7915bc9e563feb2ddc033849e0129b5ac9a47977e3f64d78f53a7e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-881642",
	                        "e27d1cd7e425"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-881642 -n old-k8s-version-881642
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-881642 -n old-k8s-version-881642: exit status 2 (425.308216ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-881642 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-881642 logs -n 25: (1.665610613s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-068349 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo containerd config dump                                                                                                                                                                                                  │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo crio config                                                                                                                                                                                                             │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ delete  │ -p cilium-068349                                                                                                                                                                                                                              │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ start   │ -p force-systemd-env-991333 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-991333  │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ force-systemd-flag-100847 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-100847 │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ delete  │ -p force-systemd-flag-100847                                                                                                                                                                                                                  │ force-systemd-flag-100847 │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ start   │ -p cert-expiration-440252 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-440252    │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:29 UTC │
	│ delete  │ -p force-systemd-env-991333                                                                                                                                                                                                                   │ force-systemd-env-991333  │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p cert-options-483456 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ cert-options-483456 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ -p cert-options-483456 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ delete  │ -p cert-options-483456                                                                                                                                                                                                                        │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-881642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ stop    │ -p old-k8s-version-881642 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-881642 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ start   │ -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:32 UTC │
	│ image   │ old-k8s-version-881642 image list --format=json                                                                                                                                                                                               │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ pause   │ -p old-k8s-version-881642 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ start   │ -p cert-expiration-440252 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-440252    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:32:14
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:32:14.783601  188476 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:32:14.783706  188476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:14.783709  188476 out.go:374] Setting ErrFile to fd 2...
	I1025 09:32:14.783714  188476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:14.784044  188476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:32:14.784470  188476 out.go:368] Setting JSON to false
	I1025 09:32:14.785664  188476 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4486,"bootTime":1761380249,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:32:14.785719  188476 start.go:141] virtualization:  
	I1025 09:32:14.789350  188476 out.go:179] * [cert-expiration-440252] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:32:14.792490  188476 notify.go:220] Checking for updates...
	I1025 09:32:14.796375  188476 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:32:14.799321  188476 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:32:14.802164  188476 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:32:14.805012  188476 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:32:14.807928  188476 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:32:14.810874  188476 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:32:14.814169  188476 config.go:182] Loaded profile config "cert-expiration-440252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:32:14.814701  188476 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:32:14.849936  188476 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:32:14.850096  188476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:14.908083  188476 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 09:32:14.898124451 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:32:14.908193  188476 docker.go:318] overlay module found
	I1025 09:32:14.911272  188476 out.go:179] * Using the docker driver based on existing profile
	I1025 09:32:14.914183  188476 start.go:305] selected driver: docker
	I1025 09:32:14.914191  188476 start.go:925] validating driver "docker" against &{Name:cert-expiration-440252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-440252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:32:14.914326  188476 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:32:14.915047  188476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:14.980373  188476 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 09:32:14.963546874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:32:14.980791  188476 cni.go:84] Creating CNI manager for ""
	I1025 09:32:14.980883  188476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:32:14.980923  188476 start.go:349] cluster config:
	{Name:cert-expiration-440252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-440252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:32:14.984772  188476 out.go:179] * Starting "cert-expiration-440252" primary control-plane node in "cert-expiration-440252" cluster
	I1025 09:32:14.988053  188476 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:32:14.991108  188476 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:32:14.994777  188476 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:32:14.994847  188476 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:32:14.994855  188476 cache.go:58] Caching tarball of preloaded images
	I1025 09:32:14.994988  188476 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:32:14.995004  188476 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:32:14.995128  188476 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/cert-expiration-440252/config.json ...
	I1025 09:32:14.995408  188476 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:32:15.045949  188476 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:32:15.045962  188476 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:32:15.045977  188476 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:32:15.046032  188476 start.go:360] acquireMachinesLock for cert-expiration-440252: {Name:mkce563f9d4415fd837f6909883a2b22117c71eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:32:15.046094  188476 start.go:364] duration metric: took 44.415µs to acquireMachinesLock for "cert-expiration-440252"
	I1025 09:32:15.046115  188476 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:32:15.046119  188476 fix.go:54] fixHost starting: 
	I1025 09:32:15.046431  188476 cli_runner.go:164] Run: docker container inspect cert-expiration-440252 --format={{.State.Status}}
	I1025 09:32:15.080273  188476 fix.go:112] recreateIfNeeded on cert-expiration-440252: state=Running err=<nil>
	W1025 09:32:15.080303  188476 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Oct 25 09:31:54 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:54.880469279Z" level=info msg="Removed container 60baea785f6966f76ac00e4e95fd38c1b7caf6402c878143bf4196cd44485a1a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wxj8r/dashboard-metrics-scraper" id=cf3f0c89-fcbd-4a05-aab0-7d994d9ff509 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:31:59 old-k8s-version-881642 conmon[1147]: conmon 623cbdf8fdad5fc331b9 <ninfo>: container 1156 exited with status 1
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.87407615Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cf8b7e32-1406-4577-bddc-6b62c6a2dee3 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.875388652Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=82a5a00a-a060-4f2d-ba93-442b8c69965a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.877922373Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e066f016-641c-4fc8-99a6-e7eb048e3dd6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.878055635Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.886399692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.886581455Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fbcad0923bbe3ed9d3be4c7a93fdf1285f3116c5d4017d60069408c4bcab5e7c/merged/etc/passwd: no such file or directory"
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.886610452Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fbcad0923bbe3ed9d3be4c7a93fdf1285f3116c5d4017d60069408c4bcab5e7c/merged/etc/group: no such file or directory"
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.886877434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.902927198Z" level=info msg="Created container 50da267439922b6c988559a9dca21888ddb2a1baa51d6996e715b7ac71d5086b: kube-system/storage-provisioner/storage-provisioner" id=e066f016-641c-4fc8-99a6-e7eb048e3dd6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.90468687Z" level=info msg="Starting container: 50da267439922b6c988559a9dca21888ddb2a1baa51d6996e715b7ac71d5086b" id=9926cf12-3033-4b69-aa1b-d48fcfce7b14 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.906341637Z" level=info msg="Started container" PID=1625 containerID=50da267439922b6c988559a9dca21888ddb2a1baa51d6996e715b7ac71d5086b description=kube-system/storage-provisioner/storage-provisioner id=9926cf12-3033-4b69-aa1b-d48fcfce7b14 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f69fb813686bfddf7565ef61a42eeb8e604f564178fcc2d9b9b2edd932f94d5c
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.470240716Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.478389373Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.478425681Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.478448024Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.481815718Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.481852518Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.481877807Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.485090421Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.485129593Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.485155735Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.488340984Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.488378884Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	50da267439922       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           16 seconds ago      Running             storage-provisioner         2                   f69fb813686bf       storage-provisioner                              kube-system
	46f9eaf860dd1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   1                   e635f838b8bb3       dashboard-metrics-scraper-5f989dc9cf-wxj8r       kubernetes-dashboard
	a5d14e1a4096f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   28 seconds ago      Running             kubernetes-dashboard        0                   9cb9b374813ff       kubernetes-dashboard-8694d4445c-pvtmx            kubernetes-dashboard
	a3ae672359d1b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           47 seconds ago      Running             kindnet-cni                 1                   adc4e7f0ab91f       kindnet-nvxh8                                    kube-system
	623cbdf8fdad5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           47 seconds ago      Exited              storage-provisioner         1                   f69fb813686bf       storage-provisioner                              kube-system
	30c881c61d527       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           47 seconds ago      Running             coredns                     1                   4297a1634b9ea       coredns-5dd5756b68-jsvbf                         kube-system
	1b2d1e8a3322d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           47 seconds ago      Running             busybox                     1                   09adb8189e13f       busybox                                          default
	1284d0247dd7a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           47 seconds ago      Running             kube-proxy                  1                   4e2e08b99b31f       kube-proxy-6929r                                 kube-system
	a3e33b746dfb8       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           53 seconds ago      Running             kube-scheduler              1                   7e3ce22e39482       kube-scheduler-old-k8s-version-881642            kube-system
	0833e8f6388be       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           53 seconds ago      Running             etcd                        1                   ba81de30004f9       etcd-old-k8s-version-881642                      kube-system
	34dcf52af6e20       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           53 seconds ago      Running             kube-controller-manager     1                   901148e6711fe       kube-controller-manager-old-k8s-version-881642   kube-system
	b09862ddb71a1       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           53 seconds ago      Running             kube-apiserver              1                   60d48d2bd1916       kube-apiserver-old-k8s-version-881642            kube-system
	
	
	==> coredns [30c881c61d527a0af79c93cf6bee55ac98ef7e8770d668d6b4d4c88d7bd21c98] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41768 - 57361 "HINFO IN 1582655585945343120.1432785718172594991. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024236058s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-881642
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-881642
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=old-k8s-version-881642
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_30_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:30:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-881642
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:32:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:31:59 +0000   Sat, 25 Oct 2025 09:30:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:31:59 +0000   Sat, 25 Oct 2025 09:30:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:31:59 +0000   Sat, 25 Oct 2025 09:30:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:31:59 +0000   Sat, 25 Oct 2025 09:30:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-881642
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                7da6e2e0-c6f7-4303-a7ca-65b12f9698fc
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-5dd5756b68-jsvbf                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     103s
	  kube-system                 etcd-old-k8s-version-881642                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-nvxh8                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-old-k8s-version-881642             250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-old-k8s-version-881642    200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-6929r                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-old-k8s-version-881642             100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-wxj8r        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-pvtmx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  Starting                 47s                  kube-proxy       
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-881642 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-881642 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-881642 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    116s                 kubelet          Node old-k8s-version-881642 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  116s                 kubelet          Node old-k8s-version-881642 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     116s                 kubelet          Node old-k8s-version-881642 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node old-k8s-version-881642 event: Registered Node old-k8s-version-881642 in Controller
	  Normal  NodeReady                89s                  kubelet          Node old-k8s-version-881642 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node old-k8s-version-881642 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node old-k8s-version-881642 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node old-k8s-version-881642 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                  node-controller  Node old-k8s-version-881642 event: Registered Node old-k8s-version-881642 in Controller
	
	
	==> dmesg <==
	[Oct25 09:07] overlayfs: idmapped layers are currently not supported
	[Oct25 09:08] overlayfs: idmapped layers are currently not supported
	[Oct25 09:09] overlayfs: idmapped layers are currently not supported
	[Oct25 09:10] overlayfs: idmapped layers are currently not supported
	[Oct25 09:11] overlayfs: idmapped layers are currently not supported
	[Oct25 09:13] overlayfs: idmapped layers are currently not supported
	[ +18.632418] overlayfs: idmapped layers are currently not supported
	[ +13.271347] overlayfs: idmapped layers are currently not supported
	[Oct25 09:14] overlayfs: idmapped layers are currently not supported
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0833e8f6388be20dd4e350ab56c1e849bebd8eeaf139810589e6f67d6a733ec2] <==
	{"level":"info","ts":"2025-10-25T09:31:23.591365Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:31:23.591528Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:31:23.591312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-25T09:31:23.591752Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-25T09:31:23.591937Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:31:23.59201Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:31:23.642069Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-25T09:31:23.642114Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-25T09:31:23.637101Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-25T09:31:23.645726Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-25T09:31:23.645679Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-25T09:31:25.051376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-25T09:31:25.051489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-25T09:31:25.051546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-25T09:31:25.051586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-25T09:31:25.051618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-25T09:31:25.051659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-25T09:31:25.051696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-25T09:31:25.056125Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-881642 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T09:31:25.05636Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:31:25.05901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-25T09:31:25.060647Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:31:25.06159Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T09:31:25.065492Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T09:31:25.065528Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:32:17 up  1:14,  0 user,  load average: 1.37, 2.47, 2.41
	Linux old-k8s-version-881642 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a3ae672359d1b3d92f50d907651b263b2eb1d131e95244ceb88afdbc80c44259] <==
	I1025 09:31:29.267184       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:31:29.314380       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 09:31:29.314530       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:31:29.314543       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:31:29.314557       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:31:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:31:29.469179       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:31:29.469196       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:31:29.469205       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:31:29.469866       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 09:31:59.469329       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:31:59.469443       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:31:59.470327       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:31:59.470340       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1025 09:32:01.069541       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:32:01.069570       1 metrics.go:72] Registering metrics
	I1025 09:32:01.069643       1 controller.go:711] "Syncing nftables rules"
	I1025 09:32:09.469787       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:32:09.469941       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b09862ddb71a1aa66ca53228ebbdef5cc24a02c13d6704bac8682ab09a4c81b9] <==
	I1025 09:31:28.372850       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1025 09:31:28.378934       1 aggregator.go:166] initial CRD sync complete...
	I1025 09:31:28.379141       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 09:31:28.379181       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:31:28.379220       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:31:28.384010       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E1025 09:31:28.395366       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:31:28.398357       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1025 09:31:28.398383       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1025 09:31:28.414589       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 09:31:28.472931       1 shared_informer.go:318] Caches are synced for configmaps
	I1025 09:31:28.473022       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1025 09:31:28.473064       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:31:28.487991       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 09:31:29.017311       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:31:30.287593       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 09:31:30.335953       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 09:31:30.379783       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:31:30.394887       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:31:30.413697       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 09:31:30.497670       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.30.155"}
	I1025 09:31:30.535689       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.0.105"}
	I1025 09:31:40.891837       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1025 09:31:40.899357       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 09:31:41.082989       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [34dcf52af6e206b8192338245011e7a8be2e48a9c59c3fdf1ca2a7d5abd47011] <==
	I1025 09:31:41.051601       1 taint_manager.go:211] "Sending events to api server"
	I1025 09:31:41.051675       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-881642"
	I1025 09:31:41.051749       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1025 09:31:41.052006       1 event.go:307] "Event occurred" object="old-k8s-version-881642" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-881642 event: Registered Node old-k8s-version-881642 in Controller"
	I1025 09:31:41.056070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="66.405µs"
	I1025 09:31:41.062182       1 shared_informer.go:318] Caches are synced for daemon sets
	I1025 09:31:41.064248       1 shared_informer.go:318] Caches are synced for node
	I1025 09:31:41.064362       1 range_allocator.go:174] "Sending events to api server"
	I1025 09:31:41.064419       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1025 09:31:41.064449       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1025 09:31:41.064478       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1025 09:31:41.070121       1 shared_informer.go:318] Caches are synced for attach detach
	I1025 09:31:41.070176       1 shared_informer.go:318] Caches are synced for GC
	I1025 09:31:41.086337       1 shared_informer.go:318] Caches are synced for TTL
	I1025 09:31:41.089929       1 shared_informer.go:318] Caches are synced for persistent volume
	I1025 09:31:41.456162       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:31:41.456194       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 09:31:41.471433       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:31:48.871755       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.767123ms"
	I1025 09:31:48.872002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="89.61µs"
	I1025 09:31:53.877782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.516µs"
	I1025 09:31:54.878947       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.698µs"
	I1025 09:31:55.875693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.179µs"
	I1025 09:31:59.661960       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.863469ms"
	I1025 09:31:59.662199       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.661µs"
	
	
	==> kube-proxy [1284d0247dd7ad941ef70c0ac331a1f4338cfb58821a9120b1f3277898bb019b] <==
	I1025 09:31:29.633165       1 server_others.go:69] "Using iptables proxy"
	I1025 09:31:29.659239       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1025 09:31:29.765724       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:31:29.768050       1 server_others.go:152] "Using iptables Proxier"
	I1025 09:31:29.768089       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 09:31:29.768097       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 09:31:29.768121       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 09:31:29.769325       1 server.go:846] "Version info" version="v1.28.0"
	I1025 09:31:29.769346       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:31:29.770166       1 config.go:188] "Starting service config controller"
	I1025 09:31:29.770191       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 09:31:29.770209       1 config.go:97] "Starting endpoint slice config controller"
	I1025 09:31:29.770212       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 09:31:29.777139       1 config.go:315] "Starting node config controller"
	I1025 09:31:29.777165       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 09:31:29.870435       1 shared_informer.go:318] Caches are synced for service config
	I1025 09:31:29.870500       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 09:31:29.878238       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a3e33b746dfb8a954389ce5444704f1ea524fab04b19741824ed405f60130162] <==
	I1025 09:31:27.422736       1 serving.go:348] Generated self-signed cert in-memory
	I1025 09:31:28.842995       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1025 09:31:28.843032       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:31:28.854536       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 09:31:28.854625       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 09:31:28.854559       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1025 09:31:28.854728       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1025 09:31:28.854594       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:31:28.859030       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 09:31:28.854610       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:31:28.859386       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1025 09:31:28.954873       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1025 09:31:28.959382       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 09:31:28.959425       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
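
The "Waiting for caches to sync" / "Caches are synced" pairs recurring in the controller-manager, kube-proxy, and scheduler logs above come from client-go's shared informers: each component blocks until its informer caches have replayed the initial LIST from the apiserver before it starts reconciling. A minimal sketch of that handshake, assuming the default kubeconfig path; the pod informer is an illustrative choice, not taken from this report:

// Minimal client-go cache-sync handshake, the source of the
// "Waiting for caches to sync" / "Caches are synced" lines above.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	pods := factory.Core().V1().Pods().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // starts the informers' reflectors

	// Block until the initial LIST has been replayed into the local cache.
	if !cache.WaitForCacheSync(stop, pods.HasSynced) {
		panic("timed out waiting for caches to sync")
	}
	fmt.Println("caches are synced")
}

WaitForCacheSync returns false only if the stop channel closes first; real components treat that as fatal rather than reconcile against an empty cache.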
	
	
	==> kubelet <==
	Oct 25 09:31:41 old-k8s-version-881642 kubelet[774]: I1025 09:31:41.091637     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p57r\" (UniqueName: \"kubernetes.io/projected/9b2d4511-c88f-448a-a3fa-07d4208a42ba-kube-api-access-4p57r\") pod \"dashboard-metrics-scraper-5f989dc9cf-wxj8r\" (UID: \"9b2d4511-c88f-448a-a3fa-07d4208a42ba\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wxj8r"
	Oct 25 09:31:41 old-k8s-version-881642 kubelet[774]: I1025 09:31:41.091812     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86qsb\" (UniqueName: \"kubernetes.io/projected/72a9c952-6f92-4ca8-8bcb-dee91a24fd0c-kube-api-access-86qsb\") pod \"kubernetes-dashboard-8694d4445c-pvtmx\" (UID: \"72a9c952-6f92-4ca8-8bcb-dee91a24fd0c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pvtmx"
	Oct 25 09:31:41 old-k8s-version-881642 kubelet[774]: I1025 09:31:41.091940     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/72a9c952-6f92-4ca8-8bcb-dee91a24fd0c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-pvtmx\" (UID: \"72a9c952-6f92-4ca8-8bcb-dee91a24fd0c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pvtmx"
	Oct 25 09:31:42 old-k8s-version-881642 kubelet[774]: E1025 09:31:42.209041     774 projected.go:292] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:31:42 old-k8s-version-881642 kubelet[774]: E1025 09:31:42.209685     774 projected.go:198] Error preparing data for projected volume kube-api-access-86qsb for pod kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pvtmx: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:31:42 old-k8s-version-881642 kubelet[774]: E1025 09:31:42.209818     774 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72a9c952-6f92-4ca8-8bcb-dee91a24fd0c-kube-api-access-86qsb podName:72a9c952-6f92-4ca8-8bcb-dee91a24fd0c nodeName:}" failed. No retries permitted until 2025-10-25 09:31:42.709787887 +0000 UTC m=+20.204581975 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-86qsb" (UniqueName: "kubernetes.io/projected/72a9c952-6f92-4ca8-8bcb-dee91a24fd0c-kube-api-access-86qsb") pod "kubernetes-dashboard-8694d4445c-pvtmx" (UID: "72a9c952-6f92-4ca8-8bcb-dee91a24fd0c") : failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:31:42 old-k8s-version-881642 kubelet[774]: E1025 09:31:42.209047     774 projected.go:292] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:31:42 old-k8s-version-881642 kubelet[774]: E1025 09:31:42.210091     774 projected.go:198] Error preparing data for projected volume kube-api-access-4p57r for pod kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wxj8r: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:31:42 old-k8s-version-881642 kubelet[774]: E1025 09:31:42.210236     774 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9b2d4511-c88f-448a-a3fa-07d4208a42ba-kube-api-access-4p57r podName:9b2d4511-c88f-448a-a3fa-07d4208a42ba nodeName:}" failed. No retries permitted until 2025-10-25 09:31:42.710215757 +0000 UTC m=+20.205009853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4p57r" (UniqueName: "kubernetes.io/projected/9b2d4511-c88f-448a-a3fa-07d4208a42ba-kube-api-access-4p57r") pod "dashboard-metrics-scraper-5f989dc9cf-wxj8r" (UID: "9b2d4511-c88f-448a-a3fa-07d4208a42ba") : failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:31:43 old-k8s-version-881642 kubelet[774]: W1025 09:31:43.121235     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/crio-9cb9b374813fff7b22fbc9ba36edf6a8f8f5599c825027505443895c42645e68 WatchSource:0}: Error finding container 9cb9b374813fff7b22fbc9ba36edf6a8f8f5599c825027505443895c42645e68: Status 404 returned error can't find the container with id 9cb9b374813fff7b22fbc9ba36edf6a8f8f5599c825027505443895c42645e68
	Oct 25 09:31:43 old-k8s-version-881642 kubelet[774]: W1025 09:31:43.139430     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/crio-e635f838b8bb37427845c52a10a94689bad95e6ecace9eff4dfbcaa9d7e2fe02 WatchSource:0}: Error finding container e635f838b8bb37427845c52a10a94689bad95e6ecace9eff4dfbcaa9d7e2fe02: Status 404 returned error can't find the container with id e635f838b8bb37427845c52a10a94689bad95e6ecace9eff4dfbcaa9d7e2fe02
	Oct 25 09:31:53 old-k8s-version-881642 kubelet[774]: I1025 09:31:53.850072     774 scope.go:117] "RemoveContainer" containerID="60baea785f6966f76ac00e4e95fd38c1b7caf6402c878143bf4196cd44485a1a"
	Oct 25 09:31:53 old-k8s-version-881642 kubelet[774]: I1025 09:31:53.878289     774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pvtmx" podStartSLOduration=8.542823582 podCreationTimestamp="2025-10-25 09:31:40 +0000 UTC" firstStartedPulling="2025-10-25 09:31:43.125737427 +0000 UTC m=+20.620531515" lastFinishedPulling="2025-10-25 09:31:48.461143446 +0000 UTC m=+25.955937534" observedRunningTime="2025-10-25 09:31:48.85287516 +0000 UTC m=+26.347669256" watchObservedRunningTime="2025-10-25 09:31:53.878229601 +0000 UTC m=+31.373023689"
	Oct 25 09:31:54 old-k8s-version-881642 kubelet[774]: I1025 09:31:54.858365     774 scope.go:117] "RemoveContainer" containerID="46f9eaf860dd1682be7c874c140c123c6acd0b09817f0b777efc9b75e40fd409"
	Oct 25 09:31:54 old-k8s-version-881642 kubelet[774]: E1025 09:31:54.858759     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wxj8r_kubernetes-dashboard(9b2d4511-c88f-448a-a3fa-07d4208a42ba)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wxj8r" podUID="9b2d4511-c88f-448a-a3fa-07d4208a42ba"
	Oct 25 09:31:54 old-k8s-version-881642 kubelet[774]: I1025 09:31:54.859227     774 scope.go:117] "RemoveContainer" containerID="60baea785f6966f76ac00e4e95fd38c1b7caf6402c878143bf4196cd44485a1a"
	Oct 25 09:31:55 old-k8s-version-881642 kubelet[774]: I1025 09:31:55.861696     774 scope.go:117] "RemoveContainer" containerID="46f9eaf860dd1682be7c874c140c123c6acd0b09817f0b777efc9b75e40fd409"
	Oct 25 09:31:55 old-k8s-version-881642 kubelet[774]: E1025 09:31:55.862059     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wxj8r_kubernetes-dashboard(9b2d4511-c88f-448a-a3fa-07d4208a42ba)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wxj8r" podUID="9b2d4511-c88f-448a-a3fa-07d4208a42ba"
	Oct 25 09:31:59 old-k8s-version-881642 kubelet[774]: I1025 09:31:59.873169     774 scope.go:117] "RemoveContainer" containerID="623cbdf8fdad5fc331b92d416fd5175dab8d10e94570d3ee7297c237142d1782"
	Oct 25 09:32:03 old-k8s-version-881642 kubelet[774]: I1025 09:32:03.086955     774 scope.go:117] "RemoveContainer" containerID="46f9eaf860dd1682be7c874c140c123c6acd0b09817f0b777efc9b75e40fd409"
	Oct 25 09:32:03 old-k8s-version-881642 kubelet[774]: E1025 09:32:03.087269     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wxj8r_kubernetes-dashboard(9b2d4511-c88f-448a-a3fa-07d4208a42ba)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wxj8r" podUID="9b2d4511-c88f-448a-a3fa-07d4208a42ba"
	Oct 25 09:32:13 old-k8s-version-881642 kubelet[774]: I1025 09:32:13.824456     774 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 25 09:32:13 old-k8s-version-881642 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:32:13 old-k8s-version-881642 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:32:13 old-k8s-version-881642 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
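
The kubelet lines above show dashboard-metrics-scraper in CrashLoopBackOff with "back-off 10s". Kubelet's container restart back-off starts at 10s and doubles on each crash up to a 5m cap, resetting after a sufficiently long healthy run. A toy Go loop printing that schedule; the attempt count is arbitrary:

// Toy reproduction of kubelet's CrashLoopBackOff schedule: delay starts at
// 10s, doubles per crash, and is capped at 5 minutes.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("restart %d delayed by %v\n", attempt, delay) // 10s, 20s, 40s, ...
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}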
	
	
	==> kubernetes-dashboard [a5d14e1a4096f529878bd9efaddeb716003abd783455c1252932adfdb39b3bd1] <==
	2025/10/25 09:31:48 Starting overwatch
	2025/10/25 09:31:48 Using namespace: kubernetes-dashboard
	2025/10/25 09:31:48 Using in-cluster config to connect to apiserver
	2025/10/25 09:31:48 Using secret token for csrf signing
	2025/10/25 09:31:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:31:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:31:48 Successful initial request to the apiserver, version: v1.28.0
	2025/10/25 09:31:48 Generating JWE encryption key
	2025/10/25 09:31:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:31:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:31:49 Initializing JWE encryption key from synchronized object
	2025/10/25 09:31:49 Creating in-cluster Sidecar client
	2025/10/25 09:31:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:31:49 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [50da267439922b6c988559a9dca21888ddb2a1baa51d6996e715b7ac71d5086b] <==
	I1025 09:31:59.924735       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:31:59.937640       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:31:59.937769       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 09:32:17.338997       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:32:17.339193       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-881642_a5dd512a-be83-4dd1-8d7e-da38160bd050!
	I1025 09:32:17.340066       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c5202046-721c-4659-94ff-20871396397e", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-881642_a5dd512a-be83-4dd1-8d7e-da38160bd050 became leader
	I1025 09:32:17.439650       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-881642_a5dd512a-be83-4dd1-8d7e-da38160bd050!
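
As the leaderelection lines above show, the provisioner only begins provisioning after winning the kube-system/k8s.io-minikube-hostpath lock (this older binary uses an Endpoints-based lock, hence the Endpoints event). A minimal sketch of the same pattern with client-go's leader election, using a modern Lease lock instead; the identity string and timings are illustrative assumptions:

// Leader election gate before starting a controller, mirroring the
// "attempting to acquire" / "successfully acquired lease" lines above.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "sketch-holder-1"}, // illustrative
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("successfully acquired lease, starting controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, shutting down")
			},
		},
	})
}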
	
	
	==> storage-provisioner [623cbdf8fdad5fc331b92d416fd5175dab8d10e94570d3ee7297c237142d1782] <==
	I1025 09:31:29.552161       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:31:59.553867       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
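
The second storage-provisioner instance in the logs above died with an i/o timeout dialing 10.96.0.1:443, the in-cluster Service VIP of the apiserver (the "kubernetes" Service carved from this profile's ServiceCIDR 10.96.0.0/12); that typically means kube-proxy rules or the CNI were not yet in place. A minimal reachability probe for that endpoint, assuming it runs inside a pod on the cluster; TLS verification is skipped because only connectivity is being tested:

// Probe the apiserver Service VIP the provisioner timed out on.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second, // same timeout as the failing request above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.96.0.1:443/version")
	if err != nil {
		fmt.Println("apiserver VIP unreachable:", err) // the failure mode in the log
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded:", resp.Status)
}

Even a 403 here would prove the VIP is reachable; only a dial timeout reproduces the provisioner's failure.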
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-881642 -n old-k8s-version-881642
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-881642 -n old-k8s-version-881642: exit status 2 (379.36894ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
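
The `--format={{.APIServer}}` flag above is a Go text/template evaluated against minikube's status struct, which is how the bare `Running` in the stdout block is produced. A minimal sketch of that mechanism; the `status` struct and its values here are illustrative, not minikube's actual types:

// Model of how a --format flag is rendered: parse the flag value as a
// text/template and execute it against the status struct.
package main

import (
	"os"
	"text/template"
)

type status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := status{Host: "Running", Kubelet: "Running", APIServer: "Paused"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	} // prints "Paused"
}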
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-881642 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-881642
helpers_test.go:243: (dbg) docker inspect old-k8s-version-881642:

-- stdout --
	[
	    {
	        "Id": "e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306",
	        "Created": "2025-10-25T09:29:58.440367349Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 186247,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:31:15.675289468Z",
	            "FinishedAt": "2025-10-25T09:31:14.774807582Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/hostname",
	        "HostsPath": "/var/lib/docker/containers/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/hosts",
	        "LogPath": "/var/lib/docker/containers/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306-json.log",
	        "Name": "/old-k8s-version-881642",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-881642:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-881642",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306",
	                "LowerDir": "/var/lib/docker/overlay2/6df03c5857d4595a180f3a88a7c703bebe35718ada3a63bcd7f20b5908953f91-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6df03c5857d4595a180f3a88a7c703bebe35718ada3a63bcd7f20b5908953f91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6df03c5857d4595a180f3a88a7c703bebe35718ada3a63bcd7f20b5908953f91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6df03c5857d4595a180f3a88a7c703bebe35718ada3a63bcd7f20b5908953f91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-881642",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-881642/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-881642",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-881642",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-881642",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f1c6d478cdefda7536d1e6673bd6842a51450e921bb065a381a45dd68cbb080",
	            "SandboxKey": "/var/run/docker/netns/6f1c6d478cde",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-881642": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:08:12:4a:f3:4b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "917e141362c271bc727ce35937091cd630c8eec9e1077a440c52d3089c688f49",
	                    "EndpointID": "55aa1d841f7915bc9e563feb2ddc033849e0129b5ac9a47977e3f64d78f53a7e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-881642",
	                        "e27d1cd7e425"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
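
The docker inspect output above shows the kic container publishing the apiserver's 8443/tcp on a random loopback port (127.0.0.1:33056 here); minikube's tooling reads these bindings to build the kubeconfig endpoint. A minimal sketch that extracts the same mapping by decoding `docker inspect` JSON; only the fields read are modeled, and error handling is abbreviated:

// Extract the host port bound to the apiserver's 8443/tcp from
// `docker inspect` output like the block above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-881642").Output()
	if err != nil {
		panic(err)
	}
	var containers []inspect // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	for _, b := range containers[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
	}
}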
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-881642 -n old-k8s-version-881642
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-881642 -n old-k8s-version-881642: exit status 2 (348.069597ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-881642 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-881642 logs -n 25: (1.323470085s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-068349 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo containerd config dump                                                                                                                                                                                                  │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ ssh     │ -p cilium-068349 sudo crio config                                                                                                                                                                                                             │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ delete  │ -p cilium-068349                                                                                                                                                                                                                              │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ start   │ -p force-systemd-env-991333 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-991333  │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ force-systemd-flag-100847 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-100847 │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ delete  │ -p force-systemd-flag-100847                                                                                                                                                                                                                  │ force-systemd-flag-100847 │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ start   │ -p cert-expiration-440252 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-440252    │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:29 UTC │
	│ delete  │ -p force-systemd-env-991333                                                                                                                                                                                                                   │ force-systemd-env-991333  │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p cert-options-483456 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ cert-options-483456 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ -p cert-options-483456 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ delete  │ -p cert-options-483456                                                                                                                                                                                                                        │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-881642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ stop    │ -p old-k8s-version-881642 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-881642 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ start   │ -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:32 UTC │
	│ image   │ old-k8s-version-881642 image list --format=json                                                                                                                                                                                               │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ pause   │ -p old-k8s-version-881642 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ start   │ -p cert-expiration-440252 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-440252    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:32:14
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:32:14.783601  188476 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:32:14.783706  188476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:14.783709  188476 out.go:374] Setting ErrFile to fd 2...
	I1025 09:32:14.783714  188476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:14.784044  188476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:32:14.784470  188476 out.go:368] Setting JSON to false
	I1025 09:32:14.785664  188476 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4486,"bootTime":1761380249,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:32:14.785719  188476 start.go:141] virtualization:  
	I1025 09:32:14.789350  188476 out.go:179] * [cert-expiration-440252] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:32:14.792490  188476 notify.go:220] Checking for updates...
	I1025 09:32:14.796375  188476 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:32:14.799321  188476 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:32:14.802164  188476 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:32:14.805012  188476 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:32:14.807928  188476 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:32:14.810874  188476 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:32:14.814169  188476 config.go:182] Loaded profile config "cert-expiration-440252": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:32:14.814701  188476 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:32:14.849936  188476 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:32:14.850096  188476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:14.908083  188476 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 09:32:14.898124451 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:32:14.908193  188476 docker.go:318] overlay module found
	I1025 09:32:14.911272  188476 out.go:179] * Using the docker driver based on existing profile
	I1025 09:32:14.914183  188476 start.go:305] selected driver: docker
	I1025 09:32:14.914191  188476 start.go:925] validating driver "docker" against &{Name:cert-expiration-440252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-440252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:32:14.914326  188476 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:32:14.915047  188476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:14.980373  188476 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 09:32:14.963546874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:32:14.980791  188476 cni.go:84] Creating CNI manager for ""
	I1025 09:32:14.980883  188476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:32:14.980923  188476 start.go:349] cluster config:
	{Name:cert-expiration-440252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-440252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:32:14.984772  188476 out.go:179] * Starting "cert-expiration-440252" primary control-plane node in "cert-expiration-440252" cluster
	I1025 09:32:14.988053  188476 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:32:14.991108  188476 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:32:14.994777  188476 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:32:14.994847  188476 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:32:14.994855  188476 cache.go:58] Caching tarball of preloaded images
	I1025 09:32:14.994988  188476 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:32:14.995004  188476 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:32:14.995128  188476 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/cert-expiration-440252/config.json ...
	I1025 09:32:14.995408  188476 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:32:15.045949  188476 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:32:15.045962  188476 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:32:15.045977  188476 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:32:15.046032  188476 start.go:360] acquireMachinesLock for cert-expiration-440252: {Name:mkce563f9d4415fd837f6909883a2b22117c71eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:32:15.046094  188476 start.go:364] duration metric: took 44.415µs to acquireMachinesLock for "cert-expiration-440252"
	I1025 09:32:15.046115  188476 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:32:15.046119  188476 fix.go:54] fixHost starting: 
	I1025 09:32:15.046431  188476 cli_runner.go:164] Run: docker container inspect cert-expiration-440252 --format={{.State.Status}}
	I1025 09:32:15.080273  188476 fix.go:112] recreateIfNeeded on cert-expiration-440252: state=Running err=<nil>
	W1025 09:32:15.080303  188476 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Oct 25 09:31:54 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:54.880469279Z" level=info msg="Removed container 60baea785f6966f76ac00e4e95fd38c1b7caf6402c878143bf4196cd44485a1a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wxj8r/dashboard-metrics-scraper" id=cf3f0c89-fcbd-4a05-aab0-7d994d9ff509 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:31:59 old-k8s-version-881642 conmon[1147]: conmon 623cbdf8fdad5fc331b9 <ninfo>: container 1156 exited with status 1
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.87407615Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cf8b7e32-1406-4577-bddc-6b62c6a2dee3 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.875388652Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=82a5a00a-a060-4f2d-ba93-442b8c69965a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.877922373Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e066f016-641c-4fc8-99a6-e7eb048e3dd6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.878055635Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.886399692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.886581455Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fbcad0923bbe3ed9d3be4c7a93fdf1285f3116c5d4017d60069408c4bcab5e7c/merged/etc/passwd: no such file or directory"
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.886610452Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fbcad0923bbe3ed9d3be4c7a93fdf1285f3116c5d4017d60069408c4bcab5e7c/merged/etc/group: no such file or directory"
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.886877434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.902927198Z" level=info msg="Created container 50da267439922b6c988559a9dca21888ddb2a1baa51d6996e715b7ac71d5086b: kube-system/storage-provisioner/storage-provisioner" id=e066f016-641c-4fc8-99a6-e7eb048e3dd6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.90468687Z" level=info msg="Starting container: 50da267439922b6c988559a9dca21888ddb2a1baa51d6996e715b7ac71d5086b" id=9926cf12-3033-4b69-aa1b-d48fcfce7b14 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:31:59 old-k8s-version-881642 crio[649]: time="2025-10-25T09:31:59.906341637Z" level=info msg="Started container" PID=1625 containerID=50da267439922b6c988559a9dca21888ddb2a1baa51d6996e715b7ac71d5086b description=kube-system/storage-provisioner/storage-provisioner id=9926cf12-3033-4b69-aa1b-d48fcfce7b14 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f69fb813686bfddf7565ef61a42eeb8e604f564178fcc2d9b9b2edd932f94d5c
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.470240716Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.478389373Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.478425681Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.478448024Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.481815718Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.481852518Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.481877807Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.485090421Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.485129593Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.485155735Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.488340984Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:32:09 old-k8s-version-881642 crio[649]: time="2025-10-25T09:32:09.488378884Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	50da267439922       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   f69fb813686bf       storage-provisioner                              kube-system
	46f9eaf860dd1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   1                   e635f838b8bb3       dashboard-metrics-scraper-5f989dc9cf-wxj8r       kubernetes-dashboard
	a5d14e1a4096f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   30 seconds ago      Running             kubernetes-dashboard        0                   9cb9b374813ff       kubernetes-dashboard-8694d4445c-pvtmx            kubernetes-dashboard
	a3ae672359d1b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   adc4e7f0ab91f       kindnet-nvxh8                                    kube-system
	623cbdf8fdad5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   f69fb813686bf       storage-provisioner                              kube-system
	30c881c61d527       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           49 seconds ago      Running             coredns                     1                   4297a1634b9ea       coredns-5dd5756b68-jsvbf                         kube-system
	1b2d1e8a3322d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   09adb8189e13f       busybox                                          default
	1284d0247dd7a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           50 seconds ago      Running             kube-proxy                  1                   4e2e08b99b31f       kube-proxy-6929r                                 kube-system
	a3e33b746dfb8       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           55 seconds ago      Running             kube-scheduler              1                   7e3ce22e39482       kube-scheduler-old-k8s-version-881642            kube-system
	0833e8f6388be       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           55 seconds ago      Running             etcd                        1                   ba81de30004f9       etcd-old-k8s-version-881642                      kube-system
	34dcf52af6e20       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           55 seconds ago      Running             kube-controller-manager     1                   901148e6711fe       kube-controller-manager-old-k8s-version-881642   kube-system
	b09862ddb71a1       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           55 seconds ago      Running             kube-apiserver              1                   60d48d2bd1916       kube-apiserver-old-k8s-version-881642            kube-system
	
	
	==> coredns [30c881c61d527a0af79c93cf6bee55ac98ef7e8770d668d6b4d4c88d7bd21c98] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41768 - 57361 "HINFO IN 1582655585945343120.1432785718172594991. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024236058s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-881642
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-881642
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=old-k8s-version-881642
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_30_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:30:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-881642
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:32:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:31:59 +0000   Sat, 25 Oct 2025 09:30:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:31:59 +0000   Sat, 25 Oct 2025 09:30:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:31:59 +0000   Sat, 25 Oct 2025 09:30:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:31:59 +0000   Sat, 25 Oct 2025 09:30:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-881642
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                7da6e2e0-c6f7-4303-a7ca-65b12f9698fc
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-jsvbf                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     105s
	  kube-system                 etcd-old-k8s-version-881642                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-nvxh8                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-881642             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-881642    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-6929r                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-881642             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-wxj8r        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-pvtmx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  Starting                 2m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-881642 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-881642 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-881642 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node old-k8s-version-881642 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node old-k8s-version-881642 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node old-k8s-version-881642 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node old-k8s-version-881642 event: Registered Node old-k8s-version-881642 in Controller
	  Normal  NodeReady                91s                  kubelet          Node old-k8s-version-881642 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node old-k8s-version-881642 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node old-k8s-version-881642 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node old-k8s-version-881642 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                  node-controller  Node old-k8s-version-881642 event: Registered Node old-k8s-version-881642 in Controller
	
	
	==> dmesg <==
	[Oct25 09:07] overlayfs: idmapped layers are currently not supported
	[Oct25 09:08] overlayfs: idmapped layers are currently not supported
	[Oct25 09:09] overlayfs: idmapped layers are currently not supported
	[Oct25 09:10] overlayfs: idmapped layers are currently not supported
	[Oct25 09:11] overlayfs: idmapped layers are currently not supported
	[Oct25 09:13] overlayfs: idmapped layers are currently not supported
	[ +18.632418] overlayfs: idmapped layers are currently not supported
	[ +13.271347] overlayfs: idmapped layers are currently not supported
	[Oct25 09:14] overlayfs: idmapped layers are currently not supported
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0833e8f6388be20dd4e350ab56c1e849bebd8eeaf139810589e6f67d6a733ec2] <==
	{"level":"info","ts":"2025-10-25T09:31:23.591365Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:31:23.591528Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T09:31:23.591312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-25T09:31:23.591752Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-25T09:31:23.591937Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:31:23.59201Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T09:31:23.642069Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-25T09:31:23.642114Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-25T09:31:23.637101Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-25T09:31:23.645726Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-25T09:31:23.645679Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-25T09:31:25.051376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-25T09:31:25.051489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-25T09:31:25.051546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-25T09:31:25.051586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-25T09:31:25.051618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-25T09:31:25.051659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-25T09:31:25.051696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-25T09:31:25.056125Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-881642 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T09:31:25.05636Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:31:25.05901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-25T09:31:25.060647Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T09:31:25.06159Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T09:31:25.065492Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T09:31:25.065528Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:32:19 up  1:14,  0 user,  load average: 1.82, 2.55, 2.43
	Linux old-k8s-version-881642 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a3ae672359d1b3d92f50d907651b263b2eb1d131e95244ceb88afdbc80c44259] <==
	I1025 09:31:29.267184       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:31:29.314380       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 09:31:29.314530       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:31:29.314543       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:31:29.314557       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:31:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:31:29.469179       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:31:29.469196       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:31:29.469205       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:31:29.469866       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 09:31:59.469329       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:31:59.469443       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:31:59.470327       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:31:59.470340       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1025 09:32:01.069541       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:32:01.069570       1 metrics.go:72] Registering metrics
	I1025 09:32:01.069643       1 controller.go:711] "Syncing nftables rules"
	I1025 09:32:09.469787       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:32:09.469941       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b09862ddb71a1aa66ca53228ebbdef5cc24a02c13d6704bac8682ab09a4c81b9] <==
	I1025 09:31:28.372850       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1025 09:31:28.378934       1 aggregator.go:166] initial CRD sync complete...
	I1025 09:31:28.379141       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 09:31:28.379181       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:31:28.379220       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:31:28.384010       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E1025 09:31:28.395366       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:31:28.398357       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1025 09:31:28.398383       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1025 09:31:28.414589       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 09:31:28.472931       1 shared_informer.go:318] Caches are synced for configmaps
	I1025 09:31:28.473022       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1025 09:31:28.473064       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:31:28.487991       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 09:31:29.017311       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:31:30.287593       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 09:31:30.335953       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 09:31:30.379783       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:31:30.394887       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:31:30.413697       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 09:31:30.497670       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.30.155"}
	I1025 09:31:30.535689       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.0.105"}
	I1025 09:31:40.891837       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1025 09:31:40.899357       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 09:31:41.082989       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [34dcf52af6e206b8192338245011e7a8be2e48a9c59c3fdf1ca2a7d5abd47011] <==
	I1025 09:31:41.051601       1 taint_manager.go:211] "Sending events to api server"
	I1025 09:31:41.051675       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-881642"
	I1025 09:31:41.051749       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1025 09:31:41.052006       1 event.go:307] "Event occurred" object="old-k8s-version-881642" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-881642 event: Registered Node old-k8s-version-881642 in Controller"
	I1025 09:31:41.056070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="66.405µs"
	I1025 09:31:41.062182       1 shared_informer.go:318] Caches are synced for daemon sets
	I1025 09:31:41.064248       1 shared_informer.go:318] Caches are synced for node
	I1025 09:31:41.064362       1 range_allocator.go:174] "Sending events to api server"
	I1025 09:31:41.064419       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1025 09:31:41.064449       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1025 09:31:41.064478       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1025 09:31:41.070121       1 shared_informer.go:318] Caches are synced for attach detach
	I1025 09:31:41.070176       1 shared_informer.go:318] Caches are synced for GC
	I1025 09:31:41.086337       1 shared_informer.go:318] Caches are synced for TTL
	I1025 09:31:41.089929       1 shared_informer.go:318] Caches are synced for persistent volume
	I1025 09:31:41.456162       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:31:41.456194       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 09:31:41.471433       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 09:31:48.871755       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.767123ms"
	I1025 09:31:48.872002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="89.61µs"
	I1025 09:31:53.877782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.516µs"
	I1025 09:31:54.878947       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.698µs"
	I1025 09:31:55.875693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.179µs"
	I1025 09:31:59.661960       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.863469ms"
	I1025 09:31:59.662199       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.661µs"
	
	
	==> kube-proxy [1284d0247dd7ad941ef70c0ac331a1f4338cfb58821a9120b1f3277898bb019b] <==
	I1025 09:31:29.633165       1 server_others.go:69] "Using iptables proxy"
	I1025 09:31:29.659239       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1025 09:31:29.765724       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:31:29.768050       1 server_others.go:152] "Using iptables Proxier"
	I1025 09:31:29.768089       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 09:31:29.768097       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 09:31:29.768121       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 09:31:29.769325       1 server.go:846] "Version info" version="v1.28.0"
	I1025 09:31:29.769346       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:31:29.770166       1 config.go:188] "Starting service config controller"
	I1025 09:31:29.770191       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 09:31:29.770209       1 config.go:97] "Starting endpoint slice config controller"
	I1025 09:31:29.770212       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 09:31:29.777139       1 config.go:315] "Starting node config controller"
	I1025 09:31:29.777165       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 09:31:29.870435       1 shared_informer.go:318] Caches are synced for service config
	I1025 09:31:29.870500       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 09:31:29.878238       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a3e33b746dfb8a954389ce5444704f1ea524fab04b19741824ed405f60130162] <==
	I1025 09:31:27.422736       1 serving.go:348] Generated self-signed cert in-memory
	I1025 09:31:28.842995       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1025 09:31:28.843032       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:31:28.854536       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 09:31:28.854625       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 09:31:28.854559       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1025 09:31:28.854728       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1025 09:31:28.854594       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:31:28.859030       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 09:31:28.854610       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:31:28.859386       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1025 09:31:28.954873       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1025 09:31:28.959382       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 09:31:28.959425       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 25 09:31:41 old-k8s-version-881642 kubelet[774]: I1025 09:31:41.091637     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p57r\" (UniqueName: \"kubernetes.io/projected/9b2d4511-c88f-448a-a3fa-07d4208a42ba-kube-api-access-4p57r\") pod \"dashboard-metrics-scraper-5f989dc9cf-wxj8r\" (UID: \"9b2d4511-c88f-448a-a3fa-07d4208a42ba\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wxj8r"
	Oct 25 09:31:41 old-k8s-version-881642 kubelet[774]: I1025 09:31:41.091812     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86qsb\" (UniqueName: \"kubernetes.io/projected/72a9c952-6f92-4ca8-8bcb-dee91a24fd0c-kube-api-access-86qsb\") pod \"kubernetes-dashboard-8694d4445c-pvtmx\" (UID: \"72a9c952-6f92-4ca8-8bcb-dee91a24fd0c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pvtmx"
	Oct 25 09:31:41 old-k8s-version-881642 kubelet[774]: I1025 09:31:41.091940     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/72a9c952-6f92-4ca8-8bcb-dee91a24fd0c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-pvtmx\" (UID: \"72a9c952-6f92-4ca8-8bcb-dee91a24fd0c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pvtmx"
	Oct 25 09:31:42 old-k8s-version-881642 kubelet[774]: E1025 09:31:42.209041     774 projected.go:292] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:31:42 old-k8s-version-881642 kubelet[774]: E1025 09:31:42.209685     774 projected.go:198] Error preparing data for projected volume kube-api-access-86qsb for pod kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pvtmx: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:31:42 old-k8s-version-881642 kubelet[774]: E1025 09:31:42.209818     774 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72a9c952-6f92-4ca8-8bcb-dee91a24fd0c-kube-api-access-86qsb podName:72a9c952-6f92-4ca8-8bcb-dee91a24fd0c nodeName:}" failed. No retries permitted until 2025-10-25 09:31:42.709787887 +0000 UTC m=+20.204581975 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-86qsb" (UniqueName: "kubernetes.io/projected/72a9c952-6f92-4ca8-8bcb-dee91a24fd0c-kube-api-access-86qsb") pod "kubernetes-dashboard-8694d4445c-pvtmx" (UID: "72a9c952-6f92-4ca8-8bcb-dee91a24fd0c") : failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:31:42 old-k8s-version-881642 kubelet[774]: E1025 09:31:42.209047     774 projected.go:292] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:31:42 old-k8s-version-881642 kubelet[774]: E1025 09:31:42.210091     774 projected.go:198] Error preparing data for projected volume kube-api-access-4p57r for pod kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wxj8r: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:31:42 old-k8s-version-881642 kubelet[774]: E1025 09:31:42.210236     774 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9b2d4511-c88f-448a-a3fa-07d4208a42ba-kube-api-access-4p57r podName:9b2d4511-c88f-448a-a3fa-07d4208a42ba nodeName:}" failed. No retries permitted until 2025-10-25 09:31:42.710215757 +0000 UTC m=+20.205009853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4p57r" (UniqueName: "kubernetes.io/projected/9b2d4511-c88f-448a-a3fa-07d4208a42ba-kube-api-access-4p57r") pod "dashboard-metrics-scraper-5f989dc9cf-wxj8r" (UID: "9b2d4511-c88f-448a-a3fa-07d4208a42ba") : failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:31:43 old-k8s-version-881642 kubelet[774]: W1025 09:31:43.121235     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/crio-9cb9b374813fff7b22fbc9ba36edf6a8f8f5599c825027505443895c42645e68 WatchSource:0}: Error finding container 9cb9b374813fff7b22fbc9ba36edf6a8f8f5599c825027505443895c42645e68: Status 404 returned error can't find the container with id 9cb9b374813fff7b22fbc9ba36edf6a8f8f5599c825027505443895c42645e68
	Oct 25 09:31:43 old-k8s-version-881642 kubelet[774]: W1025 09:31:43.139430     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e27d1cd7e425155b8d954cdf2863c1fb9aa2eb9c8de7b81cc7d6ed7da8c49306/crio-e635f838b8bb37427845c52a10a94689bad95e6ecace9eff4dfbcaa9d7e2fe02 WatchSource:0}: Error finding container e635f838b8bb37427845c52a10a94689bad95e6ecace9eff4dfbcaa9d7e2fe02: Status 404 returned error can't find the container with id e635f838b8bb37427845c52a10a94689bad95e6ecace9eff4dfbcaa9d7e2fe02
	Oct 25 09:31:53 old-k8s-version-881642 kubelet[774]: I1025 09:31:53.850072     774 scope.go:117] "RemoveContainer" containerID="60baea785f6966f76ac00e4e95fd38c1b7caf6402c878143bf4196cd44485a1a"
	Oct 25 09:31:53 old-k8s-version-881642 kubelet[774]: I1025 09:31:53.878289     774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pvtmx" podStartSLOduration=8.542823582 podCreationTimestamp="2025-10-25 09:31:40 +0000 UTC" firstStartedPulling="2025-10-25 09:31:43.125737427 +0000 UTC m=+20.620531515" lastFinishedPulling="2025-10-25 09:31:48.461143446 +0000 UTC m=+25.955937534" observedRunningTime="2025-10-25 09:31:48.85287516 +0000 UTC m=+26.347669256" watchObservedRunningTime="2025-10-25 09:31:53.878229601 +0000 UTC m=+31.373023689"
	Oct 25 09:31:54 old-k8s-version-881642 kubelet[774]: I1025 09:31:54.858365     774 scope.go:117] "RemoveContainer" containerID="46f9eaf860dd1682be7c874c140c123c6acd0b09817f0b777efc9b75e40fd409"
	Oct 25 09:31:54 old-k8s-version-881642 kubelet[774]: E1025 09:31:54.858759     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wxj8r_kubernetes-dashboard(9b2d4511-c88f-448a-a3fa-07d4208a42ba)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wxj8r" podUID="9b2d4511-c88f-448a-a3fa-07d4208a42ba"
	Oct 25 09:31:54 old-k8s-version-881642 kubelet[774]: I1025 09:31:54.859227     774 scope.go:117] "RemoveContainer" containerID="60baea785f6966f76ac00e4e95fd38c1b7caf6402c878143bf4196cd44485a1a"
	Oct 25 09:31:55 old-k8s-version-881642 kubelet[774]: I1025 09:31:55.861696     774 scope.go:117] "RemoveContainer" containerID="46f9eaf860dd1682be7c874c140c123c6acd0b09817f0b777efc9b75e40fd409"
	Oct 25 09:31:55 old-k8s-version-881642 kubelet[774]: E1025 09:31:55.862059     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wxj8r_kubernetes-dashboard(9b2d4511-c88f-448a-a3fa-07d4208a42ba)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wxj8r" podUID="9b2d4511-c88f-448a-a3fa-07d4208a42ba"
	Oct 25 09:31:59 old-k8s-version-881642 kubelet[774]: I1025 09:31:59.873169     774 scope.go:117] "RemoveContainer" containerID="623cbdf8fdad5fc331b92d416fd5175dab8d10e94570d3ee7297c237142d1782"
	Oct 25 09:32:03 old-k8s-version-881642 kubelet[774]: I1025 09:32:03.086955     774 scope.go:117] "RemoveContainer" containerID="46f9eaf860dd1682be7c874c140c123c6acd0b09817f0b777efc9b75e40fd409"
	Oct 25 09:32:03 old-k8s-version-881642 kubelet[774]: E1025 09:32:03.087269     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wxj8r_kubernetes-dashboard(9b2d4511-c88f-448a-a3fa-07d4208a42ba)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wxj8r" podUID="9b2d4511-c88f-448a-a3fa-07d4208a42ba"
	Oct 25 09:32:13 old-k8s-version-881642 kubelet[774]: I1025 09:32:13.824456     774 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 25 09:32:13 old-k8s-version-881642 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:32:13 old-k8s-version-881642 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:32:13 old-k8s-version-881642 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a5d14e1a4096f529878bd9efaddeb716003abd783455c1252932adfdb39b3bd1] <==
	2025/10/25 09:31:48 Using namespace: kubernetes-dashboard
	2025/10/25 09:31:48 Using in-cluster config to connect to apiserver
	2025/10/25 09:31:48 Using secret token for csrf signing
	2025/10/25 09:31:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:31:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:31:48 Successful initial request to the apiserver, version: v1.28.0
	2025/10/25 09:31:48 Generating JWE encryption key
	2025/10/25 09:31:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:31:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:31:49 Initializing JWE encryption key from synchronized object
	2025/10/25 09:31:49 Creating in-cluster Sidecar client
	2025/10/25 09:31:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:31:49 Serving insecurely on HTTP port: 9090
	2025/10/25 09:32:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:31:48 Starting overwatch
	
	
	==> storage-provisioner [50da267439922b6c988559a9dca21888ddb2a1baa51d6996e715b7ac71d5086b] <==
	I1025 09:31:59.924735       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:31:59.937640       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:31:59.937769       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 09:32:17.338997       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:32:17.339193       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-881642_a5dd512a-be83-4dd1-8d7e-da38160bd050!
	I1025 09:32:17.340066       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c5202046-721c-4659-94ff-20871396397e", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-881642_a5dd512a-be83-4dd1-8d7e-da38160bd050 became leader
	I1025 09:32:17.439650       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-881642_a5dd512a-be83-4dd1-8d7e-da38160bd050!
	
	
	==> storage-provisioner [623cbdf8fdad5fc331b92d416fd5175dab8d10e94570d3ee7297c237142d1782] <==
	I1025 09:31:29.552161       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:31:59.553867       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-881642 -n old-k8s-version-881642
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-881642 -n old-k8s-version-881642: exit status 2 (407.911735ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-881642 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.86s)
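Note on the status probe above: the templated field printed "Running", yet the command exited 2 because minikube encodes any not-OK component in the exit code rather than in the single field selected by --format; here the kubelet had just been stopped for the pause (see the kubelet log above). A minimal sketch of how to see the full per-component breakdown, assuming the same profile and binary, would be:

	out/minikube-linux-arm64 status -p old-k8s-version-881642 --output json

This invocation is illustrative, not part of the recorded run; --output json prints every component's state instead of one templated field.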

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.58s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-179869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-179869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (303.291228ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:33:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
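For context, the MK_ADDON_ENABLE_PAUSED failure above is minikube's paused-state check: before enabling an addon it shells into the node and lists runc containers, and that probe fails here because /run/runc does not exist on this CRI-O node. A minimal sketch of the same probe run by hand, assuming the profile above, would be:

	out/minikube-linux-arm64 ssh -p no-preload-179869 -- sudo runc list -f json

If CRI-O were configured with a different OCI runtime (crun keeps its state under /run/crun, for example), the equivalent hypothetical check would be sudo crun list instead.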
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-179869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-179869 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-179869 describe deploy/metrics-server -n kube-system: exit status 1 (84.270233ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-179869 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
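For reference, the expected-image assertion inspects the container images in the deployment spec; a minimal sketch of the same check, assuming the deployment had actually been created, would be:

	kubectl --context no-preload-179869 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'

Here the enable step failed before creating the deployment, so the describe above returns NotFound and the expected "fake.domain/registry.k8s.io/echoserver:1.4" never appears.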
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-179869
helpers_test.go:243: (dbg) docker inspect no-preload-179869:

-- stdout --
	[
	    {
	        "Id": "021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea",
	        "Created": "2025-10-25T09:32:25.431032619Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 190673,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:32:25.781898586Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/hostname",
	        "HostsPath": "/var/lib/docker/containers/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/hosts",
	        "LogPath": "/var/lib/docker/containers/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea-json.log",
	        "Name": "/no-preload-179869",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-179869:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-179869",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea",
	                "LowerDir": "/var/lib/docker/overlay2/81e00092661e10c44ffb145286208642057ce877d4a86b73f561cb203e788f89-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/81e00092661e10c44ffb145286208642057ce877d4a86b73f561cb203e788f89/merged",
	                "UpperDir": "/var/lib/docker/overlay2/81e00092661e10c44ffb145286208642057ce877d4a86b73f561cb203e788f89/diff",
	                "WorkDir": "/var/lib/docker/overlay2/81e00092661e10c44ffb145286208642057ce877d4a86b73f561cb203e788f89/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-179869",
	                "Source": "/var/lib/docker/volumes/no-preload-179869/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-179869",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-179869",
	                "name.minikube.sigs.k8s.io": "no-preload-179869",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7b0a7429f822cc953bcf581081a7f251cd835add321cc5e4d52c80625657391f",
	            "SandboxKey": "/var/run/docker/netns/7b0a7429f822",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-179869": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:b2:7a:61:18:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ff99d2418ad390d8ccdf5911c4bca3c6d1626ffae4866e35866344c13c51df93",
	                    "EndpointID": "635c2c683e2ac6af10c8e5bbb27e5cc7a9d1ed42dadac92ab45f94b0895545f9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-179869",
	                        "021c28390d46"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
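The "HostPort": "" entries under HostConfig.PortBindings in the inspect output above request ephemeral loopback bindings; the ports Docker actually assigned appear under NetworkSettings.Ports (33058-33062 here). For reference, a single mapping can be read back with a Go-template query, in the style of the cli_runner calls that appear later in this log. A minimal sketch (illustrative only, not minikube's own helper; container name and port taken from the output above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort shells out to the Docker CLI and extracts the loopback port
	// mapped to a container port, e.g. "8443/tcp" -> "33061" in the output above.
	func hostPort(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		p, err := hostPort("no-preload-179869", "8443/tcp")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("API server mapped to 127.0.0.1:" + p)
	}

The same query shape appears verbatim further down in this log, where the provisioner resolves the "22/tcp" mapping to find its SSH port.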
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-179869 -n no-preload-179869
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-179869 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-179869 logs -n 25: (1.276837499s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-068349 sudo crio config                                                                                                                                                                                                             │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │                     │
	│ delete  │ -p cilium-068349                                                                                                                                                                                                                              │ cilium-068349             │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ start   │ -p force-systemd-env-991333 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-991333  │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ force-systemd-flag-100847 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-100847 │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ delete  │ -p force-systemd-flag-100847                                                                                                                                                                                                                  │ force-systemd-flag-100847 │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ start   │ -p cert-expiration-440252 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-440252    │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:29 UTC │
	│ delete  │ -p force-systemd-env-991333                                                                                                                                                                                                                   │ force-systemd-env-991333  │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p cert-options-483456 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ cert-options-483456 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ -p cert-options-483456 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ delete  │ -p cert-options-483456                                                                                                                                                                                                                        │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-881642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ stop    │ -p old-k8s-version-881642 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-881642 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ start   │ -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:32 UTC │
	│ image   │ old-k8s-version-881642 image list --format=json                                                                                                                                                                                               │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ pause   │ -p old-k8s-version-881642 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ start   │ -p cert-expiration-440252 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-440252    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-881642                                                                                                                                                                                                                     │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-881642                                                                                                                                                                                                                     │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-179869         │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:33 UTC │
	│ delete  │ -p cert-expiration-440252                                                                                                                                                                                                                     │ cert-expiration-440252    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-173264        │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-179869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-179869         │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:32:40
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:32:40.943418  193057 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:32:40.943528  193057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:40.943580  193057 out.go:374] Setting ErrFile to fd 2...
	I1025 09:32:40.943584  193057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:40.943858  193057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:32:40.944321  193057 out.go:368] Setting JSON to false
	I1025 09:32:40.945348  193057 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4512,"bootTime":1761380249,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:32:40.945412  193057 start.go:141] virtualization:  
	I1025 09:32:40.953418  193057 out.go:179] * [embed-certs-173264] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:32:40.957160  193057 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:32:40.957285  193057 notify.go:220] Checking for updates...
	I1025 09:32:40.964375  193057 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:32:40.967677  193057 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:32:40.970862  193057 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:32:40.974075  193057 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:32:40.977230  193057 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:32:40.980907  193057 config.go:182] Loaded profile config "no-preload-179869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:32:40.981005  193057 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:32:41.005338  193057 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:32:41.005488  193057 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:41.111752  193057 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-25 09:32:41.101948558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:32:41.111855  193057 docker.go:318] overlay module found
	I1025 09:32:41.115106  193057 out.go:179] * Using the docker driver based on user configuration
	I1025 09:32:41.118117  193057 start.go:305] selected driver: docker
	I1025 09:32:41.118132  193057 start.go:925] validating driver "docker" against <nil>
	I1025 09:32:41.118146  193057 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:32:41.118819  193057 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:41.202647  193057 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-25 09:32:41.193248856 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:32:41.202804  193057 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:32:41.203021  193057 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:32:41.206052  193057 out.go:179] * Using Docker driver with root privileges
	I1025 09:32:41.212173  193057 cni.go:84] Creating CNI manager for ""
	I1025 09:32:41.212259  193057 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:32:41.212272  193057 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:32:41.212350  193057 start.go:349] cluster config:
	{Name:embed-certs-173264 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-173264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:32:41.215623  193057 out.go:179] * Starting "embed-certs-173264" primary control-plane node in "embed-certs-173264" cluster
	I1025 09:32:41.218618  193057 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:32:41.221408  193057 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:32:41.224445  193057 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:32:41.224513  193057 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:32:41.224527  193057 cache.go:58] Caching tarball of preloaded images
	I1025 09:32:41.224538  193057 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:32:41.224606  193057 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:32:41.224617  193057 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:32:41.224718  193057 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/config.json ...
	I1025 09:32:41.224734  193057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/config.json: {Name:mk97224c4995df438cd7fbaa7867b99bdf439201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:41.247234  193057 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:32:41.247262  193057 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:32:41.247280  193057 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:32:41.247305  193057 start.go:360] acquireMachinesLock for embed-certs-173264: {Name:mke81dcd321ea4fd5503be9a5895c5ebc5dee6a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:32:41.247415  193057 start.go:364] duration metric: took 90.693µs to acquireMachinesLock for "embed-certs-173264"
	I1025 09:32:41.247445  193057 start.go:93] Provisioning new machine with config: &{Name:embed-certs-173264 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-173264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:32:41.247512  193057 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:32:39.741047  190179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.50245546s)
	I1025 09:32:39.741073  190179 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1025 09:32:39.741089  190179 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 09:32:39.741138  190179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 09:32:39.741204  190179 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.502796346s)
	I1025 09:32:39.741233  190179 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 09:32:39.741299  190179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1025 09:32:41.312052  190179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.57088779s)
	I1025 09:32:41.312086  190179 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1025 09:32:41.312103  190179 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1025 09:32:41.312150  190179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1025 09:32:41.312255  190179 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.570940558s)
	I1025 09:32:41.312276  190179 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1025 09:32:41.312292  190179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1025 09:32:43.797825  190179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.485648441s)
	I1025 09:32:43.797848  190179 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1025 09:32:43.797865  190179 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 09:32:43.797913  190179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 09:32:41.250975  193057 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:32:41.251296  193057 start.go:159] libmachine.API.Create for "embed-certs-173264" (driver="docker")
	I1025 09:32:41.251347  193057 client.go:168] LocalClient.Create starting
	I1025 09:32:41.251432  193057 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem
	I1025 09:32:41.251473  193057 main.go:141] libmachine: Decoding PEM data...
	I1025 09:32:41.251491  193057 main.go:141] libmachine: Parsing certificate...
	I1025 09:32:41.251567  193057 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem
	I1025 09:32:41.251591  193057 main.go:141] libmachine: Decoding PEM data...
	I1025 09:32:41.251607  193057 main.go:141] libmachine: Parsing certificate...
	I1025 09:32:41.251978  193057 cli_runner.go:164] Run: docker network inspect embed-certs-173264 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:32:41.268337  193057 cli_runner.go:211] docker network inspect embed-certs-173264 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:32:41.268415  193057 network_create.go:284] running [docker network inspect embed-certs-173264] to gather additional debugging logs...
	I1025 09:32:41.268434  193057 cli_runner.go:164] Run: docker network inspect embed-certs-173264
	W1025 09:32:41.296309  193057 cli_runner.go:211] docker network inspect embed-certs-173264 returned with exit code 1
	I1025 09:32:41.296338  193057 network_create.go:287] error running [docker network inspect embed-certs-173264]: docker network inspect embed-certs-173264: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-173264 not found
	I1025 09:32:41.296351  193057 network_create.go:289] output of [docker network inspect embed-certs-173264]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-173264 not found
	
	** /stderr **
	I1025 09:32:41.296462  193057 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:32:41.330738  193057 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4076b76bdd01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:93:ad:e4:3e:11} reservation:<nil>}
	I1025 09:32:41.331070  193057 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ab40ae949743 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ea:83:23:78:ca:4d} reservation:<nil>}
	I1025 09:32:41.331395  193057 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ff3fdd90dcc2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:d4:a3:43:c3:da} reservation:<nil>}
	I1025 09:32:41.331642  193057 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ff99d2418ad3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ae:82:a5:6b:3f:27} reservation:<nil>}
	I1025 09:32:41.332028  193057 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a4c7a0}
	I1025 09:32:41.332063  193057 network_create.go:124] attempt to create docker network embed-certs-173264 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 09:32:41.332116  193057 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-173264 embed-certs-173264
	I1025 09:32:41.418704  193057 network_create.go:108] docker network embed-certs-173264 192.168.85.0/24 created
	I1025 09:32:41.418738  193057 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-173264" container
	I1025 09:32:41.418824  193057 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:32:41.449896  193057 cli_runner.go:164] Run: docker volume create embed-certs-173264 --label name.minikube.sigs.k8s.io=embed-certs-173264 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:32:41.479250  193057 oci.go:103] Successfully created a docker volume embed-certs-173264
	I1025 09:32:41.479358  193057 cli_runner.go:164] Run: docker run --rm --name embed-certs-173264-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-173264 --entrypoint /usr/bin/test -v embed-certs-173264:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:32:42.219452  193057 oci.go:107] Successfully prepared a docker volume embed-certs-173264
	I1025 09:32:42.219497  193057 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:32:42.219518  193057 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:32:42.219595  193057 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-173264:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
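(Worth noting the pattern in the command above: the preload tarball is bind-mounted read-only into a throwaway container whose entrypoint is tar, with the named volume mounted at /extractDir to receive the unpacked images; --rm discards the container once extraction finishes, which the log below confirms took about 6.2 seconds.)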
	I1025 09:32:46.247472  190179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (2.449540386s)
	I1025 09:32:46.247496  190179 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1025 09:32:46.247519  190179 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1025 09:32:46.247567  190179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1025 09:32:48.169218  190179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.921625275s)
	I1025 09:32:48.169244  190179 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1025 09:32:48.169270  190179 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1025 09:32:48.169318  190179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1025 09:32:48.424788  193057 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-173264:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.205158806s)
	I1025 09:32:48.424815  193057 kic.go:203] duration metric: took 6.205294603s to extract preloaded images to volume ...
	W1025 09:32:48.424949  193057 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 09:32:48.425057  193057 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:32:48.499137  193057 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-173264 --name embed-certs-173264 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-173264 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-173264 --network embed-certs-173264 --ip 192.168.85.2 --volume embed-certs-173264:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:32:48.889022  193057 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Running}}
	I1025 09:32:48.910574  193057 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:32:48.939454  193057 cli_runner.go:164] Run: docker exec embed-certs-173264 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:32:49.031446  193057 oci.go:144] the created container "embed-certs-173264" has a running status.
	I1025 09:32:49.031480  193057 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa...
	I1025 09:32:49.594898  193057 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:32:49.618797  193057 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:32:49.641715  193057 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:32:49.641733  193057 kic_runner.go:114] Args: [docker exec --privileged embed-certs-173264 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:32:49.722643  193057 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:32:49.747950  193057 machine.go:93] provisionDockerMachine start ...
	I1025 09:32:49.748040  193057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:32:49.777146  193057 main.go:141] libmachine: Using SSH client type: native
	I1025 09:32:49.777478  193057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1025 09:32:49.777493  193057 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:32:49.778226  193057 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
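The handshake EOF above is expected on first contact: sshd inside the just-created container is not accepting connections yet, so the provisioner retries until it is (the same SSH command succeeds at 09:32:52 further down). A minimal sketch of that wait-until-reachable pattern, assuming a plain TCP probe rather than minikube's actual SSH retry logic (the 127.0.0.1:33063 address is the mapping shown in this log):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForTCP polls addr until a TCP connection succeeds or the deadline
	// passes, doubling the back-off between attempts. A successful TCP connect
	// only shows the port is open; a real SSH handshake can still fail, as above.
	func waitForTCP(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for backoff := 250 * time.Millisecond; time.Now().Before(deadline); backoff *= 2 {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(backoff)
		}
		return fmt.Errorf("%s not reachable within %s", addr, timeout)
	}

	func main() {
		if err := waitForTCP("127.0.0.1:33063", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}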
	I1025 09:32:52.373303  190179 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.203958958s)
	I1025 09:32:52.373329  190179 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1025 09:32:52.373348  190179 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1025 09:32:52.373395  190179 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1025 09:32:52.989438  190179 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1025 09:32:52.989470  190179 cache_images.go:124] Successfully loaded all cached images
	I1025 09:32:52.989476  190179 cache_images.go:93] duration metric: took 18.161133539s to LoadCachedImages
	I1025 09:32:52.989486  190179 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 09:32:52.989580  190179 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-179869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-179869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
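(The empty ExecStart= immediately before the full kubelet command above is deliberate systemd idiom: a non-oneshot service may declare only one ExecStart, so an override or drop-in must first clear the inherited value with an empty assignment before setting its own command line.)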
	I1025 09:32:52.989662  190179 ssh_runner.go:195] Run: crio config
	I1025 09:32:53.090655  190179 cni.go:84] Creating CNI manager for ""
	I1025 09:32:53.090675  190179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:32:53.090690  190179 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:32:53.090715  190179 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-179869 NodeName:no-preload-179869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:32:53.090839  190179 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-179869"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
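The generated config above is four YAML documents in one stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a hedged sketch, such a stream can be sanity-checked by listing each document's apiVersion and kind; this assumes the config has been saved to a local file named kubeadm.yaml (hypothetical) and that the gopkg.in/yaml.v3 module is available:

	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// kubeadm.yaml is a hypothetical local copy of the config above.
		data, err := os.ReadFile("kubeadm.yaml")
		if err != nil {
			fmt.Println(err)
			return
		}
		// Split on the standard YAML document separator.
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var meta struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
				fmt.Println("parse error:", err)
				continue
			}
			if meta.Kind == "" {
				continue // blank document between separators
			}
			fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
		}
	}

Run against the config above, this should print the four apiVersion/kind pairs, which is a quick way to catch an indentation or separator slip before kubeadm does.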
	I1025 09:32:53.090916  190179 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:32:53.100080  190179 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1025 09:32:53.100158  190179 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1025 09:32:53.108725  190179 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1025 09:32:53.109084  190179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1025 09:32:53.109609  190179 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21796-2312/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1025 09:32:53.110088  190179 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21796-2312/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1025 09:32:53.114642  190179 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1025 09:32:53.114683  190179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1025 09:32:52.941498  193057 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-173264
	
	I1025 09:32:52.941532  193057 ubuntu.go:182] provisioning hostname "embed-certs-173264"
	I1025 09:32:52.941618  193057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:32:52.960452  193057 main.go:141] libmachine: Using SSH client type: native
	I1025 09:32:52.960761  193057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1025 09:32:52.960778  193057 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-173264 && echo "embed-certs-173264" | sudo tee /etc/hostname
	I1025 09:32:53.136378  193057 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-173264
	
	I1025 09:32:53.136454  193057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:32:53.168063  193057 main.go:141] libmachine: Using SSH client type: native
	I1025 09:32:53.168376  193057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1025 09:32:53.168393  193057 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-173264' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-173264/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-173264' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:32:53.352680  193057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
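The hosts-file block above is idempotent: `grep -xq` whole-line-matches an existing hostname entry, an existing 127.0.1.1 line is rewritten in place, and a new line is appended only as a last resort. The same guard, condensed with the hostname pulled out into a variable (the variable name is illustrative only):

	H=embed-certs-173264          # illustrative; any machine name works
	if ! grep -q "[[:space:]]${H}$" /etc/hosts; then
	    if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${H}/" /etc/hosts
	    else
	        echo "127.0.1.1 ${H}" | sudo tee -a /etc/hosts
	    fi
	fi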
	I1025 09:32:53.352715  193057 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:32:53.352746  193057 ubuntu.go:190] setting up certificates
	I1025 09:32:53.352759  193057 provision.go:84] configureAuth start
	I1025 09:32:53.352822  193057 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-173264
	I1025 09:32:53.386224  193057 provision.go:143] copyHostCerts
	I1025 09:32:53.386291  193057 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:32:53.386309  193057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:32:53.386391  193057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:32:53.386484  193057 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:32:53.386489  193057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:32:53.386516  193057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:32:53.386576  193057 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:32:53.386585  193057 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:32:53.386610  193057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 09:32:53.386656  193057 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.embed-certs-173264 san=[127.0.0.1 192.168.85.2 embed-certs-173264 localhost minikube]
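configureAuth bakes that SAN list (loopback, the node IP 192.168.85.2, the hostname, localhost, minikube) into server.pem, which is why one cert serves every address the node is reached on. A quick way to confirm on the host, assuming OpenSSL 1.1.1+:

	# print only the subjectAltName extension of the generated server cert
	openssl x509 -noout -ext subjectAltName \
	    -in /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem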
	I1025 09:32:53.956627  193057 provision.go:177] copyRemoteCerts
	I1025 09:32:53.956704  193057 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:32:53.956769  193057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:32:54.013317  193057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:32:54.135028  193057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:32:54.159507  193057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:32:54.187616  193057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:32:54.218534  193057 provision.go:87] duration metric: took 865.751859ms to configureAuth
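copyRemoteCerts leaves the CA plus the server key pair under /etc/docker on the guest. A quick chain check over the same SSH session (paths exactly as scp'd above):

	# succeeds only if server.pem was signed by ca.pem
	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem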
	I1025 09:32:54.218558  193057 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:32:54.218755  193057 config.go:182] Loaded profile config "embed-certs-173264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:32:54.218857  193057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:32:54.244987  193057 main.go:141] libmachine: Using SSH client type: native
	I1025 09:32:54.245283  193057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1025 09:32:54.245301  193057 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:32:54.590137  193057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:32:54.590155  193057 machine.go:96] duration metric: took 4.842183228s to provisionDockerMachine
	I1025 09:32:54.590165  193057 client.go:171] duration metric: took 13.338806527s to LocalClient.Create
	I1025 09:32:54.590177  193057 start.go:167] duration metric: took 13.338883492s to libmachine.API.Create "embed-certs-173264"
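The /etc/sysconfig/crio.minikube file written a few lines above is an environment file; minikube writes it there on the assumption that the image's crio unit sources it, so the `--insecure-registry 10.96.0.0/12` flag (covering the in-cluster service CIDR) takes effect on restart. A sketch to confirm on the node, assuming systemd and that the unit really references the file:

	cat /etc/sysconfig/crio.minikube
	# shows the merged unit; the EnvironmentFile line, if present, names the source
	sudo systemctl cat crio | grep -i EnvironmentFile
	systemctl is-active crio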
	I1025 09:32:54.590185  193057 start.go:293] postStartSetup for "embed-certs-173264" (driver="docker")
	I1025 09:32:54.590194  193057 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:32:54.590257  193057 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:32:54.590303  193057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:32:54.667644  193057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:32:54.829932  193057 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:32:54.835646  193057 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:32:54.835674  193057 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:32:54.835687  193057 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:32:54.835743  193057 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:32:54.835825  193057 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:32:54.835923  193057 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:32:54.846106  193057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:32:54.876394  193057 start.go:296] duration metric: took 286.19594ms for postStartSetup
	I1025 09:32:54.876747  193057 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-173264
	I1025 09:32:54.901421  193057 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/config.json ...
	I1025 09:32:54.901834  193057 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:32:54.901978  193057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:32:54.927110  193057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:32:55.034625  193057 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:32:55.042667  193057 start.go:128] duration metric: took 13.795136692s to createHost
	I1025 09:32:55.042691  193057 start.go:83] releasing machines lock for "embed-certs-173264", held for 13.795263825s
	I1025 09:32:55.042774  193057 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-173264
	I1025 09:32:55.071588  193057 ssh_runner.go:195] Run: cat /version.json
	I1025 09:32:55.071640  193057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:32:55.071877  193057 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:32:55.071938  193057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:32:55.098307  193057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:32:55.102505  193057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:32:55.214173  193057 ssh_runner.go:195] Run: systemctl --version
	I1025 09:32:55.331266  193057 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:32:55.400842  193057 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:32:55.405488  193057 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:32:55.405549  193057 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:32:55.437942  193057 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
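Renaming the stock bridge/podman configs to *.mk_disabled leaves /etc/cni/net.d without competing networks, so the CNI minikube installs for this profile (kindnet, per the recommendation further down) is the only one CRI-O loads. What survives is easy to list:

	# only *.mk_disabled leftovers and minikube's own CNI config should appear
	sudo ls -la /etc/cni/net.d/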
	I1025 09:32:55.437973  193057 start.go:495] detecting cgroup driver to use...
	I1025 09:32:55.438030  193057 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:32:55.438081  193057 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:32:55.464796  193057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:32:55.479547  193057 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:32:55.479606  193057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:32:55.497949  193057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:32:55.519021  193057 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:32:55.675247  193057 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:32:55.894669  193057 docker.go:234] disabling docker service ...
	I1025 09:32:55.894736  193057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:32:55.932974  193057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:32:55.961502  193057 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:32:56.180731  193057 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:32:56.365903  193057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:32:56.381760  193057 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:32:56.398490  193057 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:32:56.398551  193057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:56.408867  193057 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:32:56.408937  193057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:56.419877  193057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:56.431128  193057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:56.441716  193057 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:32:56.451915  193057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:56.462461  193057 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:56.478911  193057 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:32:56.488417  193057 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:32:56.496853  193057 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:32:56.505121  193057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:32:56.669973  193057 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:32:57.928036  193057 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.258023281s)
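That run of sed edits rewrote /etc/crio/crio.conf.d/02-crio.conf in place: the pause image pinned to registry.k8s.io/pause:3.10.1, cgroup_manager forced to cgroupfs to match the kubelet, conmon_cgroup reset to "pod", and net.ipv4.ip_unprivileged_port_start=0 injected into default_sysctls. After the restart the result can be spot-checked with:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf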
	I1025 09:32:57.928058  193057 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:32:57.928115  193057 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:32:57.932092  193057 start.go:563] Will wait 60s for crictl version
	I1025 09:32:57.932153  193057 ssh_runner.go:195] Run: which crictl
	I1025 09:32:57.936484  193057 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:32:57.972294  193057 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
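Because /etc/crictl.yaml was written earlier with runtime-endpoint pointing at the CRI-O socket, plain crictl invocations like the version check above reach CRI-O without an explicit endpoint flag. The equivalent explicit form, for reference:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version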
	I1025 09:32:57.972383  193057 ssh_runner.go:195] Run: crio --version
	I1025 09:32:58.004129  193057 ssh_runner.go:195] Run: crio --version
	I1025 09:32:58.048151  193057 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:32:54.174108  190179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:32:54.180673  190179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1025 09:32:54.198186  190179 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1025 09:32:54.198262  190179 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1025 09:32:54.198274  190179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1025 09:32:54.216325  190179 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1025 09:32:54.216371  190179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1025 09:32:54.901510  190179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:32:54.910466  190179 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 09:32:54.925077  190179 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:32:54.947396  190179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 09:32:54.963137  190179 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:32:54.967535  190179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:32:54.979411  190179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:32:55.162664  190179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:32:55.188264  190179 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869 for IP: 192.168.76.2
	I1025 09:32:55.188288  190179 certs.go:195] generating shared ca certs ...
	I1025 09:32:55.188305  190179 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:55.188448  190179 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:32:55.188496  190179 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:32:55.188508  190179 certs.go:257] generating profile certs ...
	I1025 09:32:55.188563  190179 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/client.key
	I1025 09:32:55.188579  190179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/client.crt with IP's: []
	I1025 09:32:55.647791  190179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/client.crt ...
	I1025 09:32:55.647823  190179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/client.crt: {Name:mk155b1acbfe508ff57620d3c29a92cfcaa239f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:55.648017  190179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/client.key ...
	I1025 09:32:55.648032  190179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/client.key: {Name:mk34ad20ba1a954ac37cb68fcfaac80f36ca14ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:55.648133  190179 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/apiserver.key.b8f832fb
	I1025 09:32:55.648152  190179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/apiserver.crt.b8f832fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 09:32:56.668879  190179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/apiserver.crt.b8f832fb ...
	I1025 09:32:56.668910  190179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/apiserver.crt.b8f832fb: {Name:mk8377bef70c79f379f2d8289abaf59744f021b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:56.669083  190179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/apiserver.key.b8f832fb ...
	I1025 09:32:56.669100  190179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/apiserver.key.b8f832fb: {Name:mkedc23787c2eb3e115ef25b54f6845d857200b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:56.669174  190179 certs.go:382] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/apiserver.crt.b8f832fb -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/apiserver.crt
	I1025 09:32:56.669255  190179 certs.go:386] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/apiserver.key.b8f832fb -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/apiserver.key
	I1025 09:32:56.669315  190179 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/proxy-client.key
	I1025 09:32:56.669337  190179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/proxy-client.crt with IP's: []
	I1025 09:32:57.368565  190179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/proxy-client.crt ...
	I1025 09:32:57.368595  190179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/proxy-client.crt: {Name:mk32a64c48e715b9de1fb6641ef9ec9c72600987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:57.368785  190179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/proxy-client.key ...
	I1025 09:32:57.368801  190179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/proxy-client.key: {Name:mk66f57f2d374d28034415e3e8abb6a1f9477275 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:57.369024  190179 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:32:57.369067  190179 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:32:57.369081  190179 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:32:57.369109  190179 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:32:57.369137  190179 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:32:57.369164  190179 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:32:57.369208  190179 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:32:57.369765  190179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:32:57.388303  190179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:32:57.409500  190179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:32:57.428905  190179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:32:57.458725  190179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 09:32:57.477707  190179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:32:57.495901  190179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:32:57.515849  190179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:32:57.534308  190179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:32:57.553002  190179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:32:57.572814  190179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:32:57.591350  190179 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:32:57.604714  190179 ssh_runner.go:195] Run: openssl version
	I1025 09:32:57.611495  190179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:32:57.620107  190179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:32:57.624039  190179 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:32:57.624114  190179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:32:57.665490  190179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:32:57.674451  190179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:32:57.683060  190179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:32:57.687256  190179 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:32:57.687336  190179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:32:57.729114  190179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
	I1025 09:32:57.738168  190179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:32:57.746997  190179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:32:57.751307  190179 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:32:57.751373  190179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:32:57.793145  190179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
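The b5213941.0, 51391683.0 and 3ec20f2e.0 link names are OpenSSL subject-hash names: the stem is the hash printed by each `openssl x509 -hash` call above, and OpenSSL-linked clients resolve trust anchors in /etc/ssl/certs by exactly that filename. Recomputing one stem by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941; the matching trust link is /etc/ssl/certs/b5213941.0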
	I1025 09:32:57.802979  190179 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:32:57.807469  190179 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:32:57.807544  190179 kubeadm.go:400] StartCluster: {Name:no-preload-179869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-179869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:32:57.807629  190179 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:32:57.807689  190179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:32:57.847221  190179 cri.go:89] found id: ""
	I1025 09:32:57.847364  190179 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:32:57.855819  190179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:32:57.868613  190179 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:32:57.868731  190179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:32:57.880051  190179 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:32:57.880119  190179 kubeadm.go:157] found existing configuration files:
	
	I1025 09:32:57.880202  190179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:32:57.890154  190179 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:32:57.890260  190179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:32:57.900349  190179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:32:57.910933  190179 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:32:57.911046  190179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:32:57.920550  190179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:32:57.935132  190179 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:32:57.935250  190179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:32:57.944848  190179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:32:57.955562  190179 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:32:57.955676  190179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
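The four grep-then-rm exchanges above are stale-kubeconfig cleanup: any of the four /etc/kubernetes/*.conf files that does not reference https://control-plane.minikube.internal:8443 is deleted so kubeadm regenerates it. Condensed into a loop, purely as a restatement of the same logic:

	for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q 'https://control-plane.minikube.internal:8443' \
	        "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	done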
	I1025 09:32:57.964929  190179 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:32:58.026674  190179 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:32:58.026929  190179 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:32:58.060540  190179 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:32:58.060616  190179 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 09:32:58.060653  190179 kubeadm.go:318] OS: Linux
	I1025 09:32:58.060710  190179 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:32:58.060767  190179 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 09:32:58.060816  190179 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:32:58.060866  190179 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:32:58.060916  190179 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:32:58.060966  190179 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:32:58.061014  190179 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:32:58.061065  190179 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:32:58.061112  190179 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 09:32:58.158544  190179 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:32:58.158656  190179 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:32:58.158751  190179 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:32:58.206453  190179 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:32:58.211863  190179 out.go:252]   - Generating certificates and keys ...
	I1025 09:32:58.211958  190179 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:32:58.212028  190179 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:32:58.050836  193057 cli_runner.go:164] Run: docker network inspect embed-certs-173264 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:32:58.083816  193057 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 09:32:58.088253  193057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:32:58.098367  193057 kubeadm.go:883] updating cluster {Name:embed-certs-173264 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-173264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:32:58.098473  193057 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:32:58.098529  193057 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:32:58.144148  193057 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:32:58.144168  193057 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:32:58.144225  193057 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:32:58.196726  193057 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:32:58.196794  193057 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:32:58.196816  193057 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 09:32:58.196933  193057 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-173264 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-173264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
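In the drop-in above, the bare `ExecStart=` line is the standard systemd idiom: it clears the packaged unit's command so the following ExecStart= can substitute minikube's pinned kubelet binary with its --node-ip, cgroup and kubeconfig flags. The merged unit can be inspected on the node with:

	sudo systemctl cat kubelet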
	I1025 09:32:58.197068  193057 ssh_runner.go:195] Run: crio config
	I1025 09:32:58.270078  193057 cni.go:84] Creating CNI manager for ""
	I1025 09:32:58.270098  193057 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:32:58.270112  193057 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:32:58.270161  193057 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-173264 NodeName:embed-certs-173264 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:32:58.270284  193057 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-173264"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:32:58.270361  193057 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:32:58.279101  193057 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:32:58.279168  193057 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:32:58.287485  193057 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 09:32:58.302880  193057 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:32:58.316742  193057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1025 09:32:58.330727  193057 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:32:58.334281  193057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:32:58.343980  193057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:32:58.494886  193057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:32:58.513012  193057 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264 for IP: 192.168.85.2
	I1025 09:32:58.513035  193057 certs.go:195] generating shared ca certs ...
	I1025 09:32:58.513051  193057 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:58.513189  193057 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:32:58.513236  193057 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:32:58.513247  193057 certs.go:257] generating profile certs ...
	I1025 09:32:58.513303  193057 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/client.key
	I1025 09:32:58.513330  193057 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/client.crt with IP's: []
	I1025 09:32:59.088688  193057 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/client.crt ...
	I1025 09:32:59.088719  193057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/client.crt: {Name:mkf75de905c96a533c5350d7f1329bc35509d678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:59.088976  193057 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/client.key ...
	I1025 09:32:59.088993  193057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/client.key: {Name:mk923e39714fb07b066f74ec53556896ac66e957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:59.089109  193057 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.key.cec4835f
	I1025 09:32:59.089129  193057 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.crt.cec4835f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1025 09:32:59.876227  193057 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.crt.cec4835f ...
	I1025 09:32:59.876261  193057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.crt.cec4835f: {Name:mkb6ec1ea20f6678af23c881f65e92513cf87bb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:59.876465  193057 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.key.cec4835f ...
	I1025 09:32:59.876479  193057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.key.cec4835f: {Name:mkefb7deafbe58361525a9b85cbdde5c86d7d1d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:59.876568  193057 certs.go:382] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.crt.cec4835f -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.crt
	I1025 09:32:59.876654  193057 certs.go:386] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.key.cec4835f -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.key
	I1025 09:32:59.876719  193057 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/proxy-client.key
	I1025 09:32:59.876737  193057 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/proxy-client.crt with IP's: []
	I1025 09:33:00.774606  193057 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/proxy-client.crt ...
	I1025 09:33:00.774636  193057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/proxy-client.crt: {Name:mk9070755fe21635100036c63f35a57169baf53e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:00.774806  193057 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/proxy-client.key ...
	I1025 09:33:00.774825  193057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/proxy-client.key: {Name:mk16d33803bf4eb5a24599231bc5f9348df4d135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:00.774996  193057 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:33:00.775044  193057 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:33:00.775058  193057 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:33:00.775084  193057 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:33:00.775113  193057 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:33:00.775140  193057 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:33:00.775184  193057 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:33:00.775748  193057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:33:00.798818  193057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:33:00.827804  193057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:33:00.847110  193057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:33:00.866174  193057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 09:33:00.884965  193057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:33:00.904771  193057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:33:00.923558  193057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:33:00.942755  193057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:33:00.963204  193057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:33:00.983183  193057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:33:01.003830  193057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:33:01.019114  193057 ssh_runner.go:195] Run: openssl version
	I1025 09:33:01.025808  193057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:33:01.034966  193057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:33:01.039322  193057 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:33:01.039389  193057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:33:01.080963  193057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
	I1025 09:33:01.090052  193057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:33:01.099112  193057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:33:01.104264  193057 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:33:01.104338  193057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:33:01.146441  193057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:33:01.155746  193057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:33:01.164742  193057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:33:01.169288  193057 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:33:01.169356  193057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:33:01.211520  193057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
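
The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's subject-hash lookup convention: a CA certificate is found under /etc/ssl/certs/<hash>.0, where the hash is printed by `openssl x509 -hash -noout -in <pem>`. A minimal Go sketch of the same idempotent link step, assuming a local openssl binary; the certificate path is taken from the log, the helper itself is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash mirrors the test -L || ln -fs step in the log above.
    func linkBySubjectHash(pem string) error {
    	// openssl prints the subject hash (e.g. b5213941) used for lookups.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	// Idempotent, like the log: only create the link if it is absent.
    	if _, err := os.Lstat(link); err == nil {
    		return nil
    	}
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
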
	I1025 09:33:01.220851  193057 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:33:01.225595  193057 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:33:01.225658  193057 kubeadm.go:400] StartCluster: {Name:embed-certs-173264 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-173264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:33:01.225747  193057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:33:01.225823  193057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:33:01.255217  193057 cri.go:89] found id: ""
	I1025 09:33:01.255304  193057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:33:01.265548  193057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:33:01.274039  193057 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:33:01.274112  193057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:33:01.285270  193057 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:33:01.285291  193057 kubeadm.go:157] found existing configuration files:
	
	I1025 09:33:01.285351  193057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:33:01.294749  193057 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:33:01.294821  193057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:33:01.304200  193057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:33:01.313395  193057 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:33:01.313470  193057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:33:01.321849  193057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:33:01.331262  193057 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:33:01.331335  193057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:33:01.339815  193057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:33:01.348947  193057 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:33:01.349030  193057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
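
The grep-then-rm sequence above is minikube's stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already names the expected control-plane endpoint, otherwise it is removed so `kubeadm init` can regenerate it. A rough local sketch of that check-then-remove pattern; the file list and endpoint come from the log, the helper itself is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		// Missing or pointing elsewhere: remove it so kubeadm init
    		// regenerates it (rm -f semantics, so errors are ignored).
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(f)
    			fmt.Println("removed stale", f)
    		}
    	}
    }
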
	I1025 09:33:01.357318  193057 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:33:01.446513  193057 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:33:01.446602  193057 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:33:01.506657  193057 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:33:01.506748  193057 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 09:33:01.506818  193057 kubeadm.go:318] OS: Linux
	I1025 09:33:01.506895  193057 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:33:01.506965  193057 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 09:33:01.507043  193057 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:33:01.507110  193057 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:33:01.507169  193057 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:33:01.507228  193057 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:33:01.507293  193057 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:33:01.507350  193057 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:33:01.507407  193057 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 09:33:01.608768  193057 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:33:01.609012  193057 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:33:01.609167  193057 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:33:01.626367  193057 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:32:58.997290  190179 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:32:59.372522  190179 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:33:00.425516  190179 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:33:00.988126  190179 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:33:01.354229  190179 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:33:01.354836  190179 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-179869] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 09:33:01.796425  190179 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:33:01.796977  190179 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-179869] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 09:33:02.147532  190179 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:33:02.366562  190179 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:33:03.062258  190179 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:33:03.062766  190179 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:33:03.728692  190179 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:33:04.241180  190179 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:33:04.955318  190179 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:33:05.233684  190179 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:33:05.638360  190179 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:33:05.638461  190179 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:33:05.638531  190179 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:33:01.632024  193057 out.go:252]   - Generating certificates and keys ...
	I1025 09:33:01.632181  193057 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:33:01.632308  193057 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:33:01.971151  193057 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:33:02.471204  193057 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:33:03.612026  193057 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:33:03.977986  193057 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:33:04.835311  193057 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:33:04.835919  193057 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-173264 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 09:33:05.491535  193057 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:33:05.492083  193057 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-173264 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 09:33:05.642065  190179 out.go:252]   - Booting up control plane ...
	I1025 09:33:05.642172  190179 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:33:05.642254  190179 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:33:05.642324  190179 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:33:05.659113  190179 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:33:05.659235  190179 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:33:05.673123  190179 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:33:05.673232  190179 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:33:05.673276  190179 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:33:05.844878  190179 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:33:05.845006  190179 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:33:07.350554  190179 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.50189985s
	I1025 09:33:07.350673  190179 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:33:07.350764  190179 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1025 09:33:07.350861  190179 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:33:07.350948  190179 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:33:06.084984  193057 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:33:06.508410  193057 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:33:07.646538  193057 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:33:07.647613  193057 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:33:08.238740  193057 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:33:09.073938  193057 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:33:09.590994  193057 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:33:10.127771  193057 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:33:10.663233  193057 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:33:10.664413  193057 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:33:10.667580  193057 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:33:10.670849  193057 out.go:252]   - Booting up control plane ...
	I1025 09:33:10.670960  193057 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:33:10.671049  193057 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:33:10.672067  193057 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:33:10.700544  193057 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:33:10.700732  193057 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:33:10.712839  193057 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:33:10.713018  193057 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:33:10.713096  193057 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:33:12.283272  190179 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.931726071s
	I1025 09:33:10.945423  193057 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:33:10.945608  193057 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:33:12.450399  193057 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501828353s
	I1025 09:33:12.450587  193057 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:33:12.450722  193057 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1025 09:33:12.450855  193057 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:33:12.450990  193057 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:33:15.324337  190179 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.972333018s
	I1025 09:33:16.352720  190179 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.001889092s
	I1025 09:33:16.387652  190179 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:33:16.413335  190179 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:33:16.442892  190179 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:33:16.443351  190179 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-179869 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:33:16.464915  190179 kubeadm.go:318] [bootstrap-token] Using token: 78xcbn.67twwpcxssm5f0fe
	I1025 09:33:16.469934  190179 out.go:252]   - Configuring RBAC rules ...
	I1025 09:33:16.470085  190179 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:33:16.482825  190179 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:33:16.506433  190179 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:33:16.514381  190179 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:33:16.522361  190179 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:33:16.528443  190179 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:33:16.765808  190179 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:33:17.314739  190179 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:33:17.778252  190179 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:33:17.778271  190179 kubeadm.go:318] 
	I1025 09:33:17.778334  190179 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:33:17.778344  190179 kubeadm.go:318] 
	I1025 09:33:17.778424  190179 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:33:17.778430  190179 kubeadm.go:318] 
	I1025 09:33:17.778456  190179 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:33:17.778517  190179 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:33:17.778570  190179 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:33:17.778575  190179 kubeadm.go:318] 
	I1025 09:33:17.778631  190179 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:33:17.778635  190179 kubeadm.go:318] 
	I1025 09:33:17.778684  190179 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:33:17.778689  190179 kubeadm.go:318] 
	I1025 09:33:17.778743  190179 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:33:17.778821  190179 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:33:17.778892  190179 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:33:17.778896  190179 kubeadm.go:318] 
	I1025 09:33:17.778984  190179 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:33:17.779064  190179 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:33:17.779068  190179 kubeadm.go:318] 
	I1025 09:33:17.779155  190179 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 78xcbn.67twwpcxssm5f0fe \
	I1025 09:33:17.779263  190179 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b \
	I1025 09:33:17.779285  190179 kubeadm.go:318] 	--control-plane 
	I1025 09:33:17.779289  190179 kubeadm.go:318] 
	I1025 09:33:17.779377  190179 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:33:17.779382  190179 kubeadm.go:318] 
	I1025 09:33:17.779468  190179 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 78xcbn.67twwpcxssm5f0fe \
	I1025 09:33:17.779575  190179 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b 
	I1025 09:33:17.780548  190179 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 09:33:17.780780  190179 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 09:33:17.780889  190179 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:33:17.780904  190179 cni.go:84] Creating CNI manager for ""
	I1025 09:33:17.780910  190179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:33:17.787884  190179 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:33:17.790237  190179 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:33:17.797940  190179 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:33:17.797957  190179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:33:17.820606  190179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:33:18.270948  190179 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:33:18.271067  190179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:18.271131  190179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-179869 minikube.k8s.io/updated_at=2025_10_25T09_33_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=no-preload-179869 minikube.k8s.io/primary=true
	I1025 09:33:18.530757  190179 ops.go:34] apiserver oom_adj: -16
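
ops.go:34 records the value read by the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command a few lines up; -16 biases the kernel's OOM killer strongly away from the apiserver under memory pressure. A minimal sketch of that probe, assuming a local kube-apiserver process (pgrep and the /proc path are the same as in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// pgrep -n prints the PID of the newest matching process.
    	pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
    		return
    	}
    	// /proc/<pid>/oom_adj holds the legacy OOM-killer bias for the process.
    	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
    }
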
	I1025 09:33:18.530859  190179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:16.696926  193057 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.245085206s
	I1025 09:33:19.337623  193057 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.887354441s
	I1025 09:33:21.459642  193057 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.008286899s
	I1025 09:33:21.496089  193057 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:33:21.528622  193057 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:33:21.560728  193057 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:33:21.560957  193057 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-173264 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:33:21.585704  193057 kubeadm.go:318] [bootstrap-token] Using token: blbziq.sjsfmlwe8uxq295o
	I1025 09:33:19.031172  190179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:19.531174  190179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:20.031658  190179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:20.530968  190179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:21.031213  190179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:21.531533  190179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:22.031479  190179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:22.162536  190179 kubeadm.go:1113] duration metric: took 3.891510583s to wait for elevateKubeSystemPrivileges
	I1025 09:33:22.162568  190179 kubeadm.go:402] duration metric: took 24.355046417s to StartCluster
	I1025 09:33:22.162586  190179 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:22.162647  190179 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:33:22.163361  190179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:22.163585  190179 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:33:22.163717  190179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:33:22.163951  190179 config.go:182] Loaded profile config "no-preload-179869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:33:22.163949  190179 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:33:22.164032  190179 addons.go:69] Setting storage-provisioner=true in profile "no-preload-179869"
	I1025 09:33:22.164047  190179 addons.go:238] Setting addon storage-provisioner=true in "no-preload-179869"
	I1025 09:33:22.164068  190179 addons.go:69] Setting default-storageclass=true in profile "no-preload-179869"
	I1025 09:33:22.164091  190179 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-179869"
	I1025 09:33:22.164071  190179 host.go:66] Checking if "no-preload-179869" exists ...
	I1025 09:33:22.164469  190179 cli_runner.go:164] Run: docker container inspect no-preload-179869 --format={{.State.Status}}
	I1025 09:33:22.164588  190179 cli_runner.go:164] Run: docker container inspect no-preload-179869 --format={{.State.Status}}
	I1025 09:33:22.167381  190179 out.go:179] * Verifying Kubernetes components...
	I1025 09:33:22.172142  190179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:33:22.202726  190179 addons.go:238] Setting addon default-storageclass=true in "no-preload-179869"
	I1025 09:33:22.202763  190179 host.go:66] Checking if "no-preload-179869" exists ...
	I1025 09:33:22.203181  190179 cli_runner.go:164] Run: docker container inspect no-preload-179869 --format={{.State.Status}}
	I1025 09:33:22.210621  190179 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:33:21.588793  193057 out.go:252]   - Configuring RBAC rules ...
	I1025 09:33:21.588937  193057 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:33:21.607425  193057 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:33:21.624738  193057 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:33:21.632580  193057 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:33:21.640023  193057 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:33:21.646995  193057 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:33:21.868896  193057 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:33:22.338596  193057 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:33:22.867971  193057 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:33:22.869426  193057 kubeadm.go:318] 
	I1025 09:33:22.869507  193057 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:33:22.869513  193057 kubeadm.go:318] 
	I1025 09:33:22.869594  193057 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:33:22.869606  193057 kubeadm.go:318] 
	I1025 09:33:22.869633  193057 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:33:22.870838  193057 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:33:22.870905  193057 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:33:22.870917  193057 kubeadm.go:318] 
	I1025 09:33:22.870976  193057 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:33:22.870987  193057 kubeadm.go:318] 
	I1025 09:33:22.871038  193057 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:33:22.871047  193057 kubeadm.go:318] 
	I1025 09:33:22.871101  193057 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:33:22.871184  193057 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:33:22.871260  193057 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:33:22.871285  193057 kubeadm.go:318] 
	I1025 09:33:22.871595  193057 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:33:22.871686  193057 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:33:22.871701  193057 kubeadm.go:318] 
	I1025 09:33:22.871976  193057 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token blbziq.sjsfmlwe8uxq295o \
	I1025 09:33:22.872094  193057 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b \
	I1025 09:33:22.873402  193057 kubeadm.go:318] 	--control-plane 
	I1025 09:33:22.873421  193057 kubeadm.go:318] 
	I1025 09:33:22.873681  193057 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:33:22.873694  193057 kubeadm.go:318] 
	I1025 09:33:22.873961  193057 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token blbziq.sjsfmlwe8uxq295o \
	I1025 09:33:22.874278  193057 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b 
	I1025 09:33:22.880469  193057 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 09:33:22.880704  193057 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 09:33:22.880812  193057 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:33:22.880836  193057 cni.go:84] Creating CNI manager for ""
	I1025 09:33:22.880844  193057 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:33:22.884112  193057 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:33:22.213587  190179 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:33:22.213608  190179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:33:22.213679  190179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179869
	I1025 09:33:22.250249  190179 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:33:22.250272  190179 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:33:22.250333  190179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179869
	I1025 09:33:22.270320  190179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/no-preload-179869/id_rsa Username:docker}
	I1025 09:33:22.293369  190179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/no-preload-179869/id_rsa Username:docker}
	I1025 09:33:22.819314  190179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:33:22.869845  190179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:33:22.976062  190179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:33:22.976263  190179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:33:24.142350  190179 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.16605826s)
	I1025 09:33:24.142382  190179 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
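
start.go:976 reports the result of the sed pipeline two lines up: a CoreDNS hosts{} block for host.minikube.internal is spliced into the Corefile ahead of the forward plugin, so the name resolves in-cluster before queries leave for the host's resolver (the pipeline also adds a log directive, omitted here). A sketch of the same rewrite as a plain string transform; the gateway IP is taken from the log, everything else is illustrative:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func injectHostRecord(corefile, hostIP string) string {
    	block := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
    	var b strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		// Insert the hosts block immediately before the forward plugin,
    		// mirroring the sed address /^        forward . \/etc\/resolv.conf.*/i.
    		if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
    			b.WriteString(block)
    		}
    		b.WriteString(line)
    	}
    	return b.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
    }
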
	I1025 09:33:24.143311  190179 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.167226103s)
	I1025 09:33:24.143947  190179 node_ready.go:35] waiting up to 6m0s for node "no-preload-179869" to be "Ready" ...
	I1025 09:33:24.144217  190179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.274344582s)
	I1025 09:33:24.147510  190179 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1025 09:33:22.887426  193057 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:33:22.894423  193057 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:33:22.894441  193057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:33:22.918533  193057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:33:23.446721  193057 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:33:23.446857  193057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:23.446932  193057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-173264 minikube.k8s.io/updated_at=2025_10_25T09_33_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=embed-certs-173264 minikube.k8s.io/primary=true
	I1025 09:33:23.847400  193057 ops.go:34] apiserver oom_adj: -16
	I1025 09:33:23.847499  193057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:24.347583  193057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:24.847637  193057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:25.348493  193057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:25.848330  193057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:26.347737  193057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:26.613542  193057 kubeadm.go:1113] duration metric: took 3.166730595s to wait for elevateKubeSystemPrivileges
	I1025 09:33:26.613569  193057 kubeadm.go:402] duration metric: took 25.387924394s to StartCluster
	I1025 09:33:26.613586  193057 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:26.613665  193057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:33:26.615027  193057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:26.615233  193057 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:33:26.615360  193057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:33:26.615614  193057 config.go:182] Loaded profile config "embed-certs-173264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:33:26.615589  193057 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:33:26.615668  193057 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-173264"
	I1025 09:33:26.615684  193057 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-173264"
	I1025 09:33:26.615707  193057 host.go:66] Checking if "embed-certs-173264" exists ...
	I1025 09:33:26.616208  193057 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:33:26.616345  193057 addons.go:69] Setting default-storageclass=true in profile "embed-certs-173264"
	I1025 09:33:26.616366  193057 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-173264"
	I1025 09:33:26.616639  193057 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:33:26.621692  193057 out.go:179] * Verifying Kubernetes components...
	I1025 09:33:26.626446  193057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:33:26.654276  193057 addons.go:238] Setting addon default-storageclass=true in "embed-certs-173264"
	I1025 09:33:26.654317  193057 host.go:66] Checking if "embed-certs-173264" exists ...
	I1025 09:33:26.654780  193057 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:33:26.670491  193057 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:33:24.150519  190179 addons.go:514] duration metric: took 1.986552914s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1025 09:33:24.651178  190179 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-179869" context rescaled to 1 replicas
	W1025 09:33:26.147522  190179 node_ready.go:57] node "no-preload-179869" has "Ready":"False" status (will retry)
	W1025 09:33:28.150519  190179 node_ready.go:57] node "no-preload-179869" has "Ready":"False" status (will retry)
	I1025 09:33:26.674567  193057 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:33:26.674588  193057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:33:26.674650  193057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:33:26.690227  193057 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:33:26.690268  193057 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:33:26.690334  193057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:33:26.717010  193057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:33:26.737538  193057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:33:27.098391  193057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:33:27.098514  193057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:33:27.180076  193057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:33:27.252237  193057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:33:28.521325  193057 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.422782925s)
	I1025 09:33:28.521399  193057 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.422981672s)
	I1025 09:33:28.521506  193057 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1025 09:33:28.524713  193057 node_ready.go:35] waiting up to 6m0s for node "embed-certs-173264" to be "Ready" ...
	I1025 09:33:28.918908  193057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.738754995s)
	I1025 09:33:28.919015  193057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.666708655s)
	I1025 09:33:28.965258  193057 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 09:33:28.968543  193057 addons.go:514] duration metric: took 2.352938907s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 09:33:29.026194  193057 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-173264" context rescaled to 1 replicas
	W1025 09:33:30.528232  193057 node_ready.go:57] node "embed-certs-173264" has "Ready":"False" status (will retry)
	W1025 09:33:30.646722  190179 node_ready.go:57] node "no-preload-179869" has "Ready":"False" status (will retry)
	W1025 09:33:33.146641  190179 node_ready.go:57] node "no-preload-179869" has "Ready":"False" status (will retry)
	W1025 09:33:32.528417  193057 node_ready.go:57] node "embed-certs-173264" has "Ready":"False" status (will retry)
	W1025 09:33:35.028739  193057 node_ready.go:57] node "embed-certs-173264" has "Ready":"False" status (will retry)
	W1025 09:33:35.147840  190179 node_ready.go:57] node "no-preload-179869" has "Ready":"False" status (will retry)
	W1025 09:33:37.647296  190179 node_ready.go:57] node "no-preload-179869" has "Ready":"False" status (will retry)
	W1025 09:33:37.528444  193057 node_ready.go:57] node "embed-certs-173264" has "Ready":"False" status (will retry)
	W1025 09:33:40.031417  193057 node_ready.go:57] node "embed-certs-173264" has "Ready":"False" status (will retry)
	W1025 09:33:39.655318  190179 node_ready.go:57] node "no-preload-179869" has "Ready":"False" status (will retry)
	I1025 09:33:40.161550  190179 node_ready.go:49] node "no-preload-179869" is "Ready"
	I1025 09:33:40.161574  190179 node_ready.go:38] duration metric: took 16.017599509s for node "no-preload-179869" to be "Ready" ...
	I1025 09:33:40.161586  190179 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:33:40.161648  190179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:33:40.187120  190179 api_server.go:72] duration metric: took 18.023486299s to wait for apiserver process to appear ...
	I1025 09:33:40.187142  190179 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:33:40.187161  190179 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 09:33:40.195604  190179 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 09:33:40.196751  190179 api_server.go:141] control plane version: v1.34.1
	I1025 09:33:40.196796  190179 api_server.go:131] duration metric: took 9.646401ms to wait for apiserver health ...
	I1025 09:33:40.196820  190179 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:33:40.200663  190179 system_pods.go:59] 8 kube-system pods found
	I1025 09:33:40.200746  190179 system_pods.go:61] "coredns-66bc5c9577-b266v" [d1a6a6b7-d2c0-444d-9175-6d41c4ef8fb3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:33:40.200768  190179 system_pods.go:61] "etcd-no-preload-179869" [cdaf6de4-a62f-457b-82b1-4dc104ae6ae4] Running
	I1025 09:33:40.200808  190179 system_pods.go:61] "kindnet-qjcqv" [0f0b6489-3c8c-4913-8326-531c043c9c46] Running
	I1025 09:33:40.200837  190179 system_pods.go:61] "kube-apiserver-no-preload-179869" [c5b4adc3-d73d-4bca-8ebd-0143daaea5ce] Running
	I1025 09:33:40.200866  190179 system_pods.go:61] "kube-controller-manager-no-preload-179869" [42abd421-0b70-416d-8b18-3cf907e5ebaf] Running
	I1025 09:33:40.200890  190179 system_pods.go:61] "kube-proxy-7xf9w" [61407858-e6fa-4653-84c8-b20276862f78] Running
	I1025 09:33:40.200930  190179 system_pods.go:61] "kube-scheduler-no-preload-179869" [2aac24b2-974a-49c6-9121-2e1c065f57c3] Running
	I1025 09:33:40.200960  190179 system_pods.go:61] "storage-provisioner" [cf5a7700-d7da-4636-9fd4-863fbc14f1bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:33:40.200987  190179 system_pods.go:74] duration metric: took 4.146572ms to wait for pod list to return data ...
	I1025 09:33:40.201009  190179 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:33:40.203783  190179 default_sa.go:45] found service account: "default"
	I1025 09:33:40.203807  190179 default_sa.go:55] duration metric: took 2.767896ms for default service account to be created ...
	I1025 09:33:40.203817  190179 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:33:40.207253  190179 system_pods.go:86] 8 kube-system pods found
	I1025 09:33:40.207290  190179 system_pods.go:89] "coredns-66bc5c9577-b266v" [d1a6a6b7-d2c0-444d-9175-6d41c4ef8fb3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:33:40.207297  190179 system_pods.go:89] "etcd-no-preload-179869" [cdaf6de4-a62f-457b-82b1-4dc104ae6ae4] Running
	I1025 09:33:40.207303  190179 system_pods.go:89] "kindnet-qjcqv" [0f0b6489-3c8c-4913-8326-531c043c9c46] Running
	I1025 09:33:40.207309  190179 system_pods.go:89] "kube-apiserver-no-preload-179869" [c5b4adc3-d73d-4bca-8ebd-0143daaea5ce] Running
	I1025 09:33:40.207313  190179 system_pods.go:89] "kube-controller-manager-no-preload-179869" [42abd421-0b70-416d-8b18-3cf907e5ebaf] Running
	I1025 09:33:40.207317  190179 system_pods.go:89] "kube-proxy-7xf9w" [61407858-e6fa-4653-84c8-b20276862f78] Running
	I1025 09:33:40.207321  190179 system_pods.go:89] "kube-scheduler-no-preload-179869" [2aac24b2-974a-49c6-9121-2e1c065f57c3] Running
	I1025 09:33:40.207328  190179 system_pods.go:89] "storage-provisioner" [cf5a7700-d7da-4636-9fd4-863fbc14f1bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:33:40.207355  190179 retry.go:31] will retry after 298.595377ms: missing components: kube-dns
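
retry.go:31 shows the poll loop used throughout these waits: run the check, and on failure sleep a short randomized interval before trying again until a deadline passes (compare the 298ms and 385ms waits in this run). A generic sketch of that pattern; the bounds and the check function are illustrative, not minikube's actual values:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryUntil(deadline time.Duration, check func() error) error {
    	start := time.Now()
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > deadline {
    			return fmt.Errorf("timed out: %w", err)
    		}
    		// Randomize the delay so parallel pollers don't hit the
    		// apiserver in lockstep.
    		d := 250*time.Millisecond + time.Duration(rand.Intn(250))*time.Millisecond
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    }

    func main() {
    	tries := 0
    	_ = retryUntil(5*time.Second, func() error {
    		tries++
    		if tries < 3 {
    			return errors.New("missing components: kube-dns")
    		}
    		return nil
    	})
    }
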
	I1025 09:33:40.510119  190179 system_pods.go:86] 8 kube-system pods found
	I1025 09:33:40.510158  190179 system_pods.go:89] "coredns-66bc5c9577-b266v" [d1a6a6b7-d2c0-444d-9175-6d41c4ef8fb3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:33:40.510166  190179 system_pods.go:89] "etcd-no-preload-179869" [cdaf6de4-a62f-457b-82b1-4dc104ae6ae4] Running
	I1025 09:33:40.510183  190179 system_pods.go:89] "kindnet-qjcqv" [0f0b6489-3c8c-4913-8326-531c043c9c46] Running
	I1025 09:33:40.510188  190179 system_pods.go:89] "kube-apiserver-no-preload-179869" [c5b4adc3-d73d-4bca-8ebd-0143daaea5ce] Running
	I1025 09:33:40.510193  190179 system_pods.go:89] "kube-controller-manager-no-preload-179869" [42abd421-0b70-416d-8b18-3cf907e5ebaf] Running
	I1025 09:33:40.510211  190179 system_pods.go:89] "kube-proxy-7xf9w" [61407858-e6fa-4653-84c8-b20276862f78] Running
	I1025 09:33:40.510222  190179 system_pods.go:89] "kube-scheduler-no-preload-179869" [2aac24b2-974a-49c6-9121-2e1c065f57c3] Running
	I1025 09:33:40.510228  190179 system_pods.go:89] "storage-provisioner" [cf5a7700-d7da-4636-9fd4-863fbc14f1bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:33:40.510243  190179 retry.go:31] will retry after 385.049693ms: missing components: kube-dns
	I1025 09:33:40.912134  190179 system_pods.go:86] 8 kube-system pods found
	I1025 09:33:40.912169  190179 system_pods.go:89] "coredns-66bc5c9577-b266v" [d1a6a6b7-d2c0-444d-9175-6d41c4ef8fb3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:33:40.912178  190179 system_pods.go:89] "etcd-no-preload-179869" [cdaf6de4-a62f-457b-82b1-4dc104ae6ae4] Running
	I1025 09:33:40.912183  190179 system_pods.go:89] "kindnet-qjcqv" [0f0b6489-3c8c-4913-8326-531c043c9c46] Running
	I1025 09:33:40.912188  190179 system_pods.go:89] "kube-apiserver-no-preload-179869" [c5b4adc3-d73d-4bca-8ebd-0143daaea5ce] Running
	I1025 09:33:40.912203  190179 system_pods.go:89] "kube-controller-manager-no-preload-179869" [42abd421-0b70-416d-8b18-3cf907e5ebaf] Running
	I1025 09:33:40.912209  190179 system_pods.go:89] "kube-proxy-7xf9w" [61407858-e6fa-4653-84c8-b20276862f78] Running
	I1025 09:33:40.912213  190179 system_pods.go:89] "kube-scheduler-no-preload-179869" [2aac24b2-974a-49c6-9121-2e1c065f57c3] Running
	I1025 09:33:40.912217  190179 system_pods.go:89] "storage-provisioner" [cf5a7700-d7da-4636-9fd4-863fbc14f1bf] Running
	I1025 09:33:40.912225  190179 system_pods.go:126] duration metric: took 708.403147ms to wait for k8s-apps to be running ...
	I1025 09:33:40.912234  190179 system_svc.go:44] waiting for kubelet service to be running ...
	I1025 09:33:40.912299  190179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:33:40.937236  190179 system_svc.go:56] duration metric: took 24.99324ms WaitForService to wait for kubelet
	I1025 09:33:40.937274  190179 kubeadm.go:586] duration metric: took 18.773645035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:33:40.937297  190179 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:33:40.968738  190179 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:33:40.968792  190179 node_conditions.go:123] node cpu capacity is 2
	I1025 09:33:40.968806  190179 node_conditions.go:105] duration metric: took 31.503452ms to run NodePressure ...
	I1025 09:33:40.968819  190179 start.go:241] waiting for startup goroutines ...
	I1025 09:33:40.968829  190179 start.go:246] waiting for cluster config update ...
	I1025 09:33:40.968841  190179 start.go:255] writing updated cluster config ...
	I1025 09:33:40.969201  190179 ssh_runner.go:195] Run: rm -f paused
	I1025 09:33:40.975266  190179 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:33:40.985549  190179 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b266v" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:33:40.993216  190179 pod_ready.go:94] pod "coredns-66bc5c9577-b266v" is "Ready"
	I1025 09:33:40.993243  190179 pod_ready.go:86] duration metric: took 7.667848ms for pod "coredns-66bc5c9577-b266v" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:33:40.995442  190179 pod_ready.go:83] waiting for pod "etcd-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:33:41.000142  190179 pod_ready.go:94] pod "etcd-no-preload-179869" is "Ready"
	I1025 09:33:41.000188  190179 pod_ready.go:86] duration metric: took 4.709131ms for pod "etcd-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:33:41.004281  190179 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:33:41.010240  190179 pod_ready.go:94] pod "kube-apiserver-no-preload-179869" is "Ready"
	I1025 09:33:41.010271  190179 pod_ready.go:86] duration metric: took 5.957172ms for pod "kube-apiserver-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:33:41.013017  190179 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:33:41.382141  190179 pod_ready.go:94] pod "kube-controller-manager-no-preload-179869" is "Ready"
	I1025 09:33:41.382208  190179 pod_ready.go:86] duration metric: took 369.164388ms for pod "kube-controller-manager-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:33:41.580319  190179 pod_ready.go:83] waiting for pod "kube-proxy-7xf9w" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:33:41.980395  190179 pod_ready.go:94] pod "kube-proxy-7xf9w" is "Ready"
	I1025 09:33:41.980481  190179 pod_ready.go:86] duration metric: took 400.134557ms for pod "kube-proxy-7xf9w" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:33:42.187575  190179 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:33:42.579237  190179 pod_ready.go:94] pod "kube-scheduler-no-preload-179869" is "Ready"
	I1025 09:33:42.579261  190179 pod_ready.go:86] duration metric: took 391.650096ms for pod "kube-scheduler-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:33:42.579273  190179 pod_ready.go:40] duration metric: took 1.603965142s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:33:42.630406  190179 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:33:42.633692  190179 out.go:179] * Done! kubectl is now configured to use "no-preload-179869" cluster and "default" namespace by default
	W1025 09:33:42.528326  193057 node_ready.go:57] node "embed-certs-173264" has "Ready":"False" status (will retry)
	W1025 09:33:45.035249  193057 node_ready.go:57] node "embed-certs-173264" has "Ready":"False" status (will retry)
	W1025 09:33:47.527643  193057 node_ready.go:57] node "embed-certs-173264" has "Ready":"False" status (will retry)
	W1025 09:33:49.528761  193057 node_ready.go:57] node "embed-certs-173264" has "Ready":"False" status (will retry)
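
The "(will retry)" warnings above come from a poll loop: the test harness repeatedly fetches the Node object, inspects its Ready condition, and sleeps between attempts until a deadline. A minimal client-go sketch of that pattern (not minikube's actual node_ready.go; the kubeconfig path and the node name from the log are placeholders) might look like:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig location; minikube writes its contexts to the default path.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 2s for up to 4 minutes, like the retry loop in the log above.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "embed-certs-173264", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient API errors as "not ready yet"
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}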
	
	
	==> CRI-O <==
	Oct 25 09:33:40 no-preload-179869 crio[842]: time="2025-10-25T09:33:40.087735011Z" level=info msg="Created container 9a2e10b3e77e45c5ed4b3dbf9795557a8bcd2c6ed54a5f765edb48dc950198b0: kube-system/coredns-66bc5c9577-b266v/coredns" id=11a3e60a-5890-48a4-a122-a0464bb782d2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:33:40 no-preload-179869 crio[842]: time="2025-10-25T09:33:40.088454208Z" level=info msg="Starting container: 9a2e10b3e77e45c5ed4b3dbf9795557a8bcd2c6ed54a5f765edb48dc950198b0" id=d1a7dd57-15d5-4dac-b17b-5e4f69c080ba name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:33:40 no-preload-179869 crio[842]: time="2025-10-25T09:33:40.095182541Z" level=info msg="Started container" PID=2482 containerID=9a2e10b3e77e45c5ed4b3dbf9795557a8bcd2c6ed54a5f765edb48dc950198b0 description=kube-system/coredns-66bc5c9577-b266v/coredns id=d1a7dd57-15d5-4dac-b17b-5e4f69c080ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=966e136fef663eafc0fbebf0deff74626f9f93b1f892bc59eaee1f65cf3ab585
	Oct 25 09:33:43 no-preload-179869 crio[842]: time="2025-10-25T09:33:43.174699515Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f6cc34cb-73c3-4c25-9324-bf9fbbc50cdb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:33:43 no-preload-179869 crio[842]: time="2025-10-25T09:33:43.174780649Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:33:43 no-preload-179869 crio[842]: time="2025-10-25T09:33:43.179925904Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3d4cf2ea99c14a5602ef47ba070fee6cae29207cd0a455c5aaf5ab79947151ca UID:c2e838fa-d7c8-4aaa-822c-b07461356def NetNS:/var/run/netns/e42897a1-a386-4fa3-b6c7-7b4320c9ae6b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400151e950}] Aliases:map[]}"
	Oct 25 09:33:43 no-preload-179869 crio[842]: time="2025-10-25T09:33:43.179964617Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:33:43 no-preload-179869 crio[842]: time="2025-10-25T09:33:43.193201447Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3d4cf2ea99c14a5602ef47ba070fee6cae29207cd0a455c5aaf5ab79947151ca UID:c2e838fa-d7c8-4aaa-822c-b07461356def NetNS:/var/run/netns/e42897a1-a386-4fa3-b6c7-7b4320c9ae6b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400151e950}] Aliases:map[]}"
	Oct 25 09:33:43 no-preload-179869 crio[842]: time="2025-10-25T09:33:43.193372526Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 09:33:43 no-preload-179869 crio[842]: time="2025-10-25T09:33:43.196911615Z" level=info msg="Ran pod sandbox 3d4cf2ea99c14a5602ef47ba070fee6cae29207cd0a455c5aaf5ab79947151ca with infra container: default/busybox/POD" id=f6cc34cb-73c3-4c25-9324-bf9fbbc50cdb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:33:43 no-preload-179869 crio[842]: time="2025-10-25T09:33:43.197970263Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1516d15b-044c-404f-893f-36d12d2b49b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:33:43 no-preload-179869 crio[842]: time="2025-10-25T09:33:43.198158162Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1516d15b-044c-404f-893f-36d12d2b49b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:33:43 no-preload-179869 crio[842]: time="2025-10-25T09:33:43.198203915Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1516d15b-044c-404f-893f-36d12d2b49b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:33:43 no-preload-179869 crio[842]: time="2025-10-25T09:33:43.200755947Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1cf3e344-3691-4bc9-9f12-ae2d143b742d name=/runtime.v1.ImageService/PullImage
	Oct 25 09:33:43 no-preload-179869 crio[842]: time="2025-10-25T09:33:43.203052986Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 09:33:45 no-preload-179869 crio[842]: time="2025-10-25T09:33:45.548732132Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=1cf3e344-3691-4bc9-9f12-ae2d143b742d name=/runtime.v1.ImageService/PullImage
	Oct 25 09:33:45 no-preload-179869 crio[842]: time="2025-10-25T09:33:45.549438086Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3c465aea-6714-43bf-a5b6-ee459450d8d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:33:45 no-preload-179869 crio[842]: time="2025-10-25T09:33:45.550938667Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=43dd6361-2470-4971-a85d-5c99b6f7cef6 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:33:45 no-preload-179869 crio[842]: time="2025-10-25T09:33:45.557964841Z" level=info msg="Creating container: default/busybox/busybox" id=facaf57c-890e-4992-9088-40f1a385e0db name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:33:45 no-preload-179869 crio[842]: time="2025-10-25T09:33:45.558132031Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:33:45 no-preload-179869 crio[842]: time="2025-10-25T09:33:45.563230254Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:33:45 no-preload-179869 crio[842]: time="2025-10-25T09:33:45.563721961Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:33:45 no-preload-179869 crio[842]: time="2025-10-25T09:33:45.577832886Z" level=info msg="Created container 9dd05085c21ffddce5e72b1fe50dd58fd453200c3178144a69118b7d9561a37a: default/busybox/busybox" id=facaf57c-890e-4992-9088-40f1a385e0db name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:33:45 no-preload-179869 crio[842]: time="2025-10-25T09:33:45.578979419Z" level=info msg="Starting container: 9dd05085c21ffddce5e72b1fe50dd58fd453200c3178144a69118b7d9561a37a" id=b5082542-0414-4571-be52-09350a4c36c2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:33:45 no-preload-179869 crio[842]: time="2025-10-25T09:33:45.58133291Z" level=info msg="Started container" PID=2538 containerID=9dd05085c21ffddce5e72b1fe50dd58fd453200c3178144a69118b7d9561a37a description=default/busybox/busybox id=b5082542-0414-4571-be52-09350a4c36c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3d4cf2ea99c14a5602ef47ba070fee6cae29207cd0a455c5aaf5ab79947151ca
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9dd05085c21ff       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   3d4cf2ea99c14       busybox                                     default
	9a2e10b3e77e4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   966e136fef663       coredns-66bc5c9577-b266v                    kube-system
	7cf2bd3328601       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   2e9f48be7577f       storage-provisioner                         kube-system
	b9a9ed4384609       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   28efe60a893be       kindnet-qjcqv                               kube-system
	ab7f1e190d621       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      28 seconds ago      Running             kube-proxy                0                   df7d85ffe0636       kube-proxy-7xf9w                            kube-system
	9ff5a95b4ce2d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      45 seconds ago      Running             kube-scheduler            0                   fd0a244653a00       kube-scheduler-no-preload-179869            kube-system
	d272f5b72c44a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      45 seconds ago      Running             kube-controller-manager   0                   799dc7870fa1e       kube-controller-manager-no-preload-179869   kube-system
	56cdd18908933       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      45 seconds ago      Running             etcd                      0                   a3c43802afe0a       etcd-no-preload-179869                      kube-system
	06b544d6ff183       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      45 seconds ago      Running             kube-apiserver            0                   4170274cd94ec       kube-apiserver-no-preload-179869            kube-system
	
	
	==> coredns [9a2e10b3e77e45c5ed4b3dbf9795557a8bcd2c6ed54a5f765edb48dc950198b0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55995 - 19285 "HINFO IN 8518809328473902394.5060952821813123864. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016864438s
	
	
	==> describe nodes <==
	Name:               no-preload-179869
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-179869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=no-preload-179869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_33_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:33:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-179869
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:33:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:33:48 +0000   Sat, 25 Oct 2025 09:33:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:33:48 +0000   Sat, 25 Oct 2025 09:33:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:33:48 +0000   Sat, 25 Oct 2025 09:33:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:33:48 +0000   Sat, 25 Oct 2025 09:33:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-179869
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                ea4ee067-c337-4055-9d54-e11f82ef0c5b
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-b266v                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-no-preload-179869                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-qjcqv                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-179869             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-no-preload-179869    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-7xf9w                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-179869             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 47s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 47s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node no-preload-179869 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node no-preload-179869 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node no-preload-179869 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node no-preload-179869 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node no-preload-179869 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     36s                kubelet          Node no-preload-179869 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           32s                node-controller  Node no-preload-179869 event: Registered Node no-preload-179869 in Controller
	  Normal   NodeReady                14s                kubelet          Node no-preload-179869 status is now: NodeReady
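
As a cross-check, the "Allocated resources" block above is just the column sums of the per-pod figures: CPU requests are 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, i.e. 850m / 2000m allocatable = 42.5%, truncated to 42%; the only CPU limit is kindnet's 100m (5%). Memory requests are 70Mi + 100Mi + 50Mi = 220Mi and memory limits 170Mi + 50Mi = 220Mi, each about 2% of the 8022296Ki allocatable.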
	
	
	==> dmesg <==
	[Oct25 09:09] overlayfs: idmapped layers are currently not supported
	[Oct25 09:10] overlayfs: idmapped layers are currently not supported
	[Oct25 09:11] overlayfs: idmapped layers are currently not supported
	[Oct25 09:13] overlayfs: idmapped layers are currently not supported
	[ +18.632418] overlayfs: idmapped layers are currently not supported
	[ +13.271347] overlayfs: idmapped layers are currently not supported
	[Oct25 09:14] overlayfs: idmapped layers are currently not supported
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	[Oct25 09:32] overlayfs: idmapped layers are currently not supported
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [56cdd18908933a80d13bdeeb08b2429295420d5bd593237ec66edd137e1b6b8d] <==
	{"level":"warn","ts":"2025-10-25T09:33:11.001291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.033532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.063641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.150538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.196951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.233440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.290588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.326274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.430197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.483858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.513073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.560018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.589229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.647399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.676980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.721167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.769277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.798963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.860250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.898110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:11.968143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:12.074156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:12.090495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:12.127138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:12.266905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45658","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:33:53 up  1:16,  0 user,  load average: 6.01, 3.81, 2.90
	Linux no-preload-179869 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b9a9ed438460973bcb5c19dc3cb266b62bc4e86083090149105eb07e0ce3db75] <==
	I1025 09:33:29.116038       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:33:29.117037       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 09:33:29.117219       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:33:29.117238       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:33:29.117253       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:33:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:33:29.323032       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:33:29.323121       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:33:29.323161       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:33:29.323335       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:33:29.614081       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:33:29.614190       1 metrics.go:72] Registering metrics
	I1025 09:33:29.614292       1 controller.go:711] "Syncing nftables rules"
	I1025 09:33:39.320904       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:33:39.320943       1 main.go:301] handling current node
	I1025 09:33:49.319856       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:33:49.319963       1 main.go:301] handling current node
	
	
	==> kube-apiserver [06b544d6ff183bc90e25d98f995b2e253eb84d59149c194ba5edfdc5832f051c] <==
	I1025 09:33:14.202034       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:33:14.426678       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:33:14.468831       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:33:14.469321       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:33:16.102627       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:33:16.165543       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:33:16.274293       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:33:16.283947       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1025 09:33:16.285046       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:33:16.289868       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:33:16.407086       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1025 09:33:17.120415       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	{"level":"warn","ts":"2025-10-25T09:33:17.122985Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40011fe3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1025 09:33:17.129892       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.611762ms, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1025 09:33:17.129387       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not a metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1025 09:33:17.131485       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1025 09:33:17.131692       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="11.384157ms" method="PATCH" path="/api/v1/namespaces/kube-system/pods/kube-apiserver-no-preload-179869/status" result=null
	I1025 09:33:17.274820       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:33:17.310860       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:33:17.330692       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:33:21.570344       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:33:21.584036       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:33:22.637281       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:33:22.645421       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1025 09:33:51.994704       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:45024: use of closed network connection
	
	
	==> kube-controller-manager [d272f5b72c44a1e09a32c64c42d1e1d0e6eee92d58a29484c98d718f6b844879] <==
	I1025 09:33:21.500232       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:33:21.502058       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:33:21.510140       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:33:21.510237       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:33:21.510339       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:33:21.510350       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:33:21.517809       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:33:21.522105       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:33:21.533742       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:33:21.534015       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:33:21.534078       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:33:21.534251       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:33:21.534327       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-179869"
	I1025 09:33:21.535142       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:33:21.534359       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:33:21.534346       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:33:21.536777       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:33:21.541522       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:33:21.534965       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:33:21.556587       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:33:21.556689       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:33:21.556718       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:33:21.561692       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-179869" podCIDRs=["10.244.0.0/24"]
	I1025 09:33:21.616670       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:33:41.538182       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ab7f1e190d621baf84ae625a9a8863164fe45ff4d52848ee625262ee0424bde8] <==
	I1025 09:33:25.140707       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:33:25.233180       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:33:25.248216       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:33:25.248246       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 09:33:25.248343       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:33:25.284529       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:33:25.284610       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:33:25.290865       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:33:25.291186       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:33:25.291201       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:33:25.292729       1 config.go:200] "Starting service config controller"
	I1025 09:33:25.292739       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:33:25.292754       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:33:25.292758       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:33:25.292769       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:33:25.292773       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:33:25.293397       1 config.go:309] "Starting node config controller"
	I1025 09:33:25.293403       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:33:25.293409       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:33:25.393036       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:33:25.393070       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:33:25.393123       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9ff5a95b4ce2da8b5bfa905c19280adadb3ab8eefdb3cdc077a2304b48e08a51] <==
	I1025 09:33:15.295957       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:33:15.299157       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:33:15.299281       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:33:15.299317       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:33:15.299340       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 09:33:15.319845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:33:15.319921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:33:15.319964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:33:15.320011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:33:15.320066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:33:15.327471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 09:33:15.330291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:33:15.330374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:33:15.330475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:33:15.330538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:33:15.330591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:33:15.344848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:33:15.344941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:33:15.344984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:33:15.345059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:33:15.345747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:33:15.345836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:33:15.346014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:33:15.346145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1025 09:33:16.802473       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:33:23 no-preload-179869 kubelet[2006]: I1025 09:33:23.025046    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61407858-e6fa-4653-84c8-b20276862f78-lib-modules\") pod \"kube-proxy-7xf9w\" (UID: \"61407858-e6fa-4653-84c8-b20276862f78\") " pod="kube-system/kube-proxy-7xf9w"
	Oct 25 09:33:23 no-preload-179869 kubelet[2006]: I1025 09:33:23.025395    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/61407858-e6fa-4653-84c8-b20276862f78-kube-proxy\") pod \"kube-proxy-7xf9w\" (UID: \"61407858-e6fa-4653-84c8-b20276862f78\") " pod="kube-system/kube-proxy-7xf9w"
	Oct 25 09:33:23 no-preload-179869 kubelet[2006]: I1025 09:33:23.025428    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mlzf\" (UniqueName: \"kubernetes.io/projected/61407858-e6fa-4653-84c8-b20276862f78-kube-api-access-8mlzf\") pod \"kube-proxy-7xf9w\" (UID: \"61407858-e6fa-4653-84c8-b20276862f78\") " pod="kube-system/kube-proxy-7xf9w"
	Oct 25 09:33:23 no-preload-179869 kubelet[2006]: I1025 09:33:23.025484    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61407858-e6fa-4653-84c8-b20276862f78-xtables-lock\") pod \"kube-proxy-7xf9w\" (UID: \"61407858-e6fa-4653-84c8-b20276862f78\") " pod="kube-system/kube-proxy-7xf9w"
	Oct 25 09:33:24 no-preload-179869 kubelet[2006]: E1025 09:33:24.126529    2006 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:33:24 no-preload-179869 kubelet[2006]: E1025 09:33:24.126580    2006 projected.go:196] Error preparing data for projected volume kube-api-access-sgjrs for pod kube-system/kindnet-qjcqv: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:33:24 no-preload-179869 kubelet[2006]: E1025 09:33:24.126671    2006 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f0b6489-3c8c-4913-8326-531c043c9c46-kube-api-access-sgjrs podName:0f0b6489-3c8c-4913-8326-531c043c9c46 nodeName:}" failed. No retries permitted until 2025-10-25 09:33:24.626645771 +0000 UTC m=+7.438559255 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sgjrs" (UniqueName: "kubernetes.io/projected/0f0b6489-3c8c-4913-8326-531c043c9c46-kube-api-access-sgjrs") pod "kindnet-qjcqv" (UID: "0f0b6489-3c8c-4913-8326-531c043c9c46") : failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:33:24 no-preload-179869 kubelet[2006]: E1025 09:33:24.186994    2006 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:33:24 no-preload-179869 kubelet[2006]: E1025 09:33:24.187041    2006 projected.go:196] Error preparing data for projected volume kube-api-access-8mlzf for pod kube-system/kube-proxy-7xf9w: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:33:24 no-preload-179869 kubelet[2006]: E1025 09:33:24.187119    2006 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61407858-e6fa-4653-84c8-b20276862f78-kube-api-access-8mlzf podName:61407858-e6fa-4653-84c8-b20276862f78 nodeName:}" failed. No retries permitted until 2025-10-25 09:33:24.687091462 +0000 UTC m=+7.499004954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8mlzf" (UniqueName: "kubernetes.io/projected/61407858-e6fa-4653-84c8-b20276862f78-kube-api-access-8mlzf") pod "kube-proxy-7xf9w" (UID: "61407858-e6fa-4653-84c8-b20276862f78") : failed to sync configmap cache: timed out waiting for the condition
	Oct 25 09:33:24 no-preload-179869 kubelet[2006]: I1025 09:33:24.669964    2006 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 09:33:25 no-preload-179869 kubelet[2006]: W1025 09:33:25.024999    2006 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/crio-df7d85ffe0636da3c5c21b54717658e8e97e30838e279ec503ca49bb5582cd2d WatchSource:0}: Error finding container df7d85ffe0636da3c5c21b54717658e8e97e30838e279ec503ca49bb5582cd2d: Status 404 returned error can't find the container with id df7d85ffe0636da3c5c21b54717658e8e97e30838e279ec503ca49bb5582cd2d
	Oct 25 09:33:25 no-preload-179869 kubelet[2006]: I1025 09:33:25.810673    2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7xf9w" podStartSLOduration=3.810654684 podStartE2EDuration="3.810654684s" podCreationTimestamp="2025-10-25 09:33:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:33:25.810300062 +0000 UTC m=+8.622213563" watchObservedRunningTime="2025-10-25 09:33:25.810654684 +0000 UTC m=+8.622568176"
	Oct 25 09:33:39 no-preload-179869 kubelet[2006]: I1025 09:33:39.640741    2006 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 09:33:39 no-preload-179869 kubelet[2006]: I1025 09:33:39.669403    2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qjcqv" podStartSLOduration=13.442498467 podStartE2EDuration="17.669383336s" podCreationTimestamp="2025-10-25 09:33:22 +0000 UTC" firstStartedPulling="2025-10-25 09:33:24.710331349 +0000 UTC m=+7.522244832" lastFinishedPulling="2025-10-25 09:33:28.937216217 +0000 UTC m=+11.749129701" observedRunningTime="2025-10-25 09:33:29.839893182 +0000 UTC m=+12.651806683" watchObservedRunningTime="2025-10-25 09:33:39.669383336 +0000 UTC m=+22.481296828"
	Oct 25 09:33:39 no-preload-179869 kubelet[2006]: I1025 09:33:39.807686    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1a6a6b7-d2c0-444d-9175-6d41c4ef8fb3-config-volume\") pod \"coredns-66bc5c9577-b266v\" (UID: \"d1a6a6b7-d2c0-444d-9175-6d41c4ef8fb3\") " pod="kube-system/coredns-66bc5c9577-b266v"
	Oct 25 09:33:39 no-preload-179869 kubelet[2006]: I1025 09:33:39.807938    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjj7t\" (UniqueName: \"kubernetes.io/projected/cf5a7700-d7da-4636-9fd4-863fbc14f1bf-kube-api-access-vjj7t\") pod \"storage-provisioner\" (UID: \"cf5a7700-d7da-4636-9fd4-863fbc14f1bf\") " pod="kube-system/storage-provisioner"
	Oct 25 09:33:39 no-preload-179869 kubelet[2006]: I1025 09:33:39.808027    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9fr6\" (UniqueName: \"kubernetes.io/projected/d1a6a6b7-d2c0-444d-9175-6d41c4ef8fb3-kube-api-access-x9fr6\") pod \"coredns-66bc5c9577-b266v\" (UID: \"d1a6a6b7-d2c0-444d-9175-6d41c4ef8fb3\") " pod="kube-system/coredns-66bc5c9577-b266v"
	Oct 25 09:33:39 no-preload-179869 kubelet[2006]: I1025 09:33:39.808242    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cf5a7700-d7da-4636-9fd4-863fbc14f1bf-tmp\") pod \"storage-provisioner\" (UID: \"cf5a7700-d7da-4636-9fd4-863fbc14f1bf\") " pod="kube-system/storage-provisioner"
	Oct 25 09:33:39 no-preload-179869 kubelet[2006]: W1025 09:33:39.988387    2006 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/crio-2e9f48be7577f8a9449c21b43d2ac97c7fd7fa34c94feb8df8d784798e39c665 WatchSource:0}: Error finding container 2e9f48be7577f8a9449c21b43d2ac97c7fd7fa34c94feb8df8d784798e39c665: Status 404 returned error can't find the container with id 2e9f48be7577f8a9449c21b43d2ac97c7fd7fa34c94feb8df8d784798e39c665
	Oct 25 09:33:40 no-preload-179869 kubelet[2006]: W1025 09:33:40.036547    2006 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/crio-966e136fef663eafc0fbebf0deff74626f9f93b1f892bc59eaee1f65cf3ab585 WatchSource:0}: Error finding container 966e136fef663eafc0fbebf0deff74626f9f93b1f892bc59eaee1f65cf3ab585: Status 404 returned error can't find the container with id 966e136fef663eafc0fbebf0deff74626f9f93b1f892bc59eaee1f65cf3ab585
	Oct 25 09:33:40 no-preload-179869 kubelet[2006]: I1025 09:33:40.889017    2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-b266v" podStartSLOduration=18.888995876 podStartE2EDuration="18.888995876s" podCreationTimestamp="2025-10-25 09:33:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:33:40.864228887 +0000 UTC m=+23.676142379" watchObservedRunningTime="2025-10-25 09:33:40.888995876 +0000 UTC m=+23.700909376"
	Oct 25 09:33:40 no-preload-179869 kubelet[2006]: I1025 09:33:40.922330    2006 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.922311085 podStartE2EDuration="16.922311085s" podCreationTimestamp="2025-10-25 09:33:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:33:40.89204494 +0000 UTC m=+23.703958424" watchObservedRunningTime="2025-10-25 09:33:40.922311085 +0000 UTC m=+23.734224568"
	Oct 25 09:33:43 no-preload-179869 kubelet[2006]: I1025 09:33:43.031920    2006 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5sc5\" (UniqueName: \"kubernetes.io/projected/c2e838fa-d7c8-4aaa-822c-b07461356def-kube-api-access-q5sc5\") pod \"busybox\" (UID: \"c2e838fa-d7c8-4aaa-822c-b07461356def\") " pod="default/busybox"
	Oct 25 09:33:43 no-preload-179869 kubelet[2006]: W1025 09:33:43.195504    2006 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/crio-3d4cf2ea99c14a5602ef47ba070fee6cae29207cd0a455c5aaf5ab79947151ca WatchSource:0}: Error finding container 3d4cf2ea99c14a5602ef47ba070fee6cae29207cd0a455c5aaf5ab79947151ca: Status 404 returned error can't find the container with id 3d4cf2ea99c14a5602ef47ba070fee6cae29207cd0a455c5aaf5ab79947151ca
	
	
	==> storage-provisioner [7cf2bd33286014cbc0110f253efeb252ba1073a8e9c5c66f5b723fd344fa433e] <==
	I1025 09:33:40.078318       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:33:40.101142       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:33:40.101190       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:33:40.112908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:40.120492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:33:40.120959       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:33:40.123939       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-179869_185289be-0ce6-42f6-a91f-08fd0006802c!
	I1025 09:33:40.131821       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6556c1b4-f5ed-4c56-8bcf-e108cc1d8bad", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-179869_185289be-0ce6-42f6-a91f-08fd0006802c became leader
	W1025 09:33:40.135272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:40.168975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:33:40.224729       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-179869_185289be-0ce6-42f6-a91f-08fd0006802c!
	W1025 09:33:42.174364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:42.180572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:44.183546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:44.188014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:46.191370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:46.196122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:48.199706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:48.204697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:50.208038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:50.214927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:52.218247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:33:52.225270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
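The repeated "v1 Endpoints is deprecated" warnings in the storage-provisioner log above come from its Endpoints-based leader-election lock and are harmless here: the lease kube-system/k8s.io-minikube-hostpath was acquired and the provisioner controller started. A minimal sketch for listing the replacement resource the warning points at, assuming the no-preload-179869 context is still reachable:

	# List the discovery.k8s.io/v1 EndpointSlice objects that supersede v1 Endpoints
	kubectl --context no-preload-179869 -n kube-system get endpointslices.discovery.k8s.io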
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-179869 -n no-preload-179869
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-179869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.58s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-173264 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-173264 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (364.969885ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:34:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
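The MK_ADDON_ENABLE_PAUSED exit above comes from the paused-state check that `addons enable` runs first: it lists runc containers on the node, and with /run/runc missing, `sudo runc list -f json` fails before any addon manifest is applied. A sketch for reproducing the failing check by hand, assuming the embed-certs-173264 node is still up:

	# Run the same runc listing minikube's paused check performs on the node
	minikube -p embed-certs-173264 ssh -- sudo runc list -f json
	# Confirm whether the runc state directory from the error message exists
	minikube -p embed-certs-173264 ssh -- ls -ld /run/runc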
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-173264 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-173264 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-173264 describe deploy/metrics-server -n kube-system: exit status 1 (131.185352ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-173264 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
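The substring assertion at start_stop_delete_test.go:219 can be approximated by reading the container image straight off the deployment; a sketch, assuming metrics-server had actually been deployed (here it never was, hence the NotFound above):

	# Print the image reference the test expects to contain fake.domain/registry.k8s.io/echoserver:1.4
	kubectl --context embed-certs-173264 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'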
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-173264
helpers_test.go:243: (dbg) docker inspect embed-certs-173264:

-- stdout --
	[
	    {
	        "Id": "7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef",
	        "Created": "2025-10-25T09:32:48.526873954Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 193591,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:32:48.607524389Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/hosts",
	        "LogPath": "/var/lib/docker/containers/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef-json.log",
	        "Name": "/embed-certs-173264",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-173264:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-173264",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef",
	                "LowerDir": "/var/lib/docker/overlay2/f31af6c3318ffc600cbea3cfd23719cc69a1f1792d31e48077fe84ae405b9fc8-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f31af6c3318ffc600cbea3cfd23719cc69a1f1792d31e48077fe84ae405b9fc8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f31af6c3318ffc600cbea3cfd23719cc69a1f1792d31e48077fe84ae405b9fc8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f31af6c3318ffc600cbea3cfd23719cc69a1f1792d31e48077fe84ae405b9fc8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-173264",
	                "Source": "/var/lib/docker/volumes/embed-certs-173264/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-173264",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-173264",
	                "name.minikube.sigs.k8s.io": "embed-certs-173264",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4a10c768efc6de459c55ce22837c6fbf0b7b7b05dd9dde41f77a79d014e68439",
	            "SandboxKey": "/var/run/docker/netns/4a10c768efc6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-173264": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:85:93:19:72:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2d181aa3ece229a97886c4873dbb8eca8797c23a56c68ee43959cebc56f78ff8",
	                    "EndpointID": "ae3e433a65083db417141a74e180c4917559770ac813c6a6e706c9fcc8d011c8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-173264",
	                        "7ab6ed1b9ea6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-173264 -n embed-certs-173264
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-173264 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-173264 logs -n 25: (1.719229285s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-flag-100847                                                                                                                                                                                                                  │ force-systemd-flag-100847 │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:28 UTC │
	│ start   │ -p cert-expiration-440252 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-440252    │ jenkins │ v1.37.0 │ 25 Oct 25 09:28 UTC │ 25 Oct 25 09:29 UTC │
	│ delete  │ -p force-systemd-env-991333                                                                                                                                                                                                                   │ force-systemd-env-991333  │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p cert-options-483456 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ cert-options-483456 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ ssh     │ -p cert-options-483456 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ delete  │ -p cert-options-483456                                                                                                                                                                                                                        │ cert-options-483456       │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-881642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ stop    │ -p old-k8s-version-881642 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-881642 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ start   │ -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:32 UTC │
	│ image   │ old-k8s-version-881642 image list --format=json                                                                                                                                                                                               │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ pause   │ -p old-k8s-version-881642 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ start   │ -p cert-expiration-440252 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-440252    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-881642                                                                                                                                                                                                                     │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-881642                                                                                                                                                                                                                     │ old-k8s-version-881642    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-179869         │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:33 UTC │
	│ delete  │ -p cert-expiration-440252                                                                                                                                                                                                                     │ cert-expiration-440252    │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-173264        │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable metrics-server -p no-preload-179869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-179869         │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │                     │
	│ stop    │ -p no-preload-179869 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-179869         │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p no-preload-179869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-179869         │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-179869         │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-173264 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-173264        │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:34:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:34:06.839295  197465 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:34:06.839843  197465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:34:06.839861  197465 out.go:374] Setting ErrFile to fd 2...
	I1025 09:34:06.839867  197465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:34:06.840406  197465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:34:06.840828  197465 out.go:368] Setting JSON to false
	I1025 09:34:06.841742  197465 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4598,"bootTime":1761380249,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:34:06.841813  197465 start.go:141] virtualization:  
	I1025 09:34:06.845029  197465 out.go:179] * [no-preload-179869] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:34:06.848955  197465 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:34:06.849067  197465 notify.go:220] Checking for updates...
	I1025 09:34:06.854972  197465 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:34:06.857885  197465 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:34:06.860784  197465 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:34:06.863688  197465 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:34:06.866591  197465 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:34:06.870032  197465 config.go:182] Loaded profile config "no-preload-179869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:34:06.870603  197465 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:34:06.894682  197465 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:34:06.894809  197465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:34:06.950691  197465 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:34:06.940463319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:34:06.950802  197465 docker.go:318] overlay module found
	I1025 09:34:06.954110  197465 out.go:179] * Using the docker driver based on existing profile
	I1025 09:34:06.957038  197465 start.go:305] selected driver: docker
	I1025 09:34:06.957062  197465 start.go:925] validating driver "docker" against &{Name:no-preload-179869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-179869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:34:06.957166  197465 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:34:06.957942  197465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:34:07.015974  197465 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:34:07.006560972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:34:07.016316  197465 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:34:07.016344  197465 cni.go:84] Creating CNI manager for ""
	I1025 09:34:07.016403  197465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:34:07.016458  197465 start.go:349] cluster config:
	{Name:no-preload-179869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-179869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
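The single-line cluster config logged above is easier to inspect from the profile file the start log reports saving to a few lines below; a sketch, assuming jq is installed on the host:

	# Pretty-print the saved profile config rather than reading the one-line log dump
	jq . /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/config.json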
	I1025 09:34:07.019661  197465 out.go:179] * Starting "no-preload-179869" primary control-plane node in "no-preload-179869" cluster
	I1025 09:34:07.022557  197465 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:34:07.025578  197465 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:34:07.029675  197465 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:34:07.029773  197465 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:34:07.029835  197465 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/config.json ...
	I1025 09:34:07.030210  197465 cache.go:107] acquiring lock: {Name:mk30111202a80727cc518d1e629922397fb5315e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:34:07.030300  197465 cache.go:115] /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1025 09:34:07.030314  197465 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 114.766µs
	I1025 09:34:07.030333  197465 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1025 09:34:07.030345  197465 cache.go:107] acquiring lock: {Name:mke7d69bcfaca831543e908656ab593a4a40fa81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:34:07.030378  197465 cache.go:115] /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1025 09:34:07.030384  197465 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 40.427µs
	I1025 09:34:07.030390  197465 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1025 09:34:07.030388  197465 cache.go:107] acquiring lock: {Name:mk126e79aa529b44847e8e0d77047712097e64d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:34:07.030417  197465 cache.go:107] acquiring lock: {Name:mk2de91b05c528d92ddafdacc8d1d796e4e0ed35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:34:07.030449  197465 cache.go:115] /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1025 09:34:07.030455  197465 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 38.032µs
	I1025 09:34:07.030461  197465 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1025 09:34:07.030462  197465 cache.go:115] /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 09:34:07.030471  197465 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 90.783µs
	I1025 09:34:07.030478  197465 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 09:34:07.030473  197465 cache.go:107] acquiring lock: {Name:mk745bd2aa1a46feb04105abd2f36294bc2faf55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:34:07.030399  197465 cache.go:107] acquiring lock: {Name:mkac4ea6a29baabc387f5e10fceb0515171b25ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:34:07.030500  197465 cache.go:115] /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1025 09:34:07.030506  197465 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.641µs
	I1025 09:34:07.030512  197465 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1025 09:34:07.030516  197465 cache.go:115] /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1025 09:34:07.030523  197465 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 124.556µs
	I1025 09:34:07.030521  197465 cache.go:107] acquiring lock: {Name:mk1dc2e18ae906ce78182133755d4a361cbb41c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:34:07.030528  197465 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1025 09:34:07.030547  197465 cache.go:115] /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1025 09:34:07.030544  197465 cache.go:107] acquiring lock: {Name:mk9b257ad5a4227487879544432d71435dc64688 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:34:07.030554  197465 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 34.413µs
	I1025 09:34:07.030560  197465 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1025 09:34:07.030583  197465 cache.go:115] /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1025 09:34:07.030589  197465 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 46.769µs
	I1025 09:34:07.030595  197465 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21796-2312/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1025 09:34:07.030601  197465 cache.go:87] Successfully saved all images to host disk.
	I1025 09:34:07.049378  197465 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:34:07.049403  197465 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:34:07.049417  197465 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:34:07.049443  197465 start.go:360] acquireMachinesLock for no-preload-179869: {Name:mk26cbb974332faf6881a64fefe2338920e51d06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:34:07.049498  197465 start.go:364] duration metric: took 36.702µs to acquireMachinesLock for "no-preload-179869"
	I1025 09:34:07.049523  197465 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:34:07.049528  197465 fix.go:54] fixHost starting: 
	I1025 09:34:07.049776  197465 cli_runner.go:164] Run: docker container inspect no-preload-179869 --format={{.State.Status}}
	I1025 09:34:07.065739  197465 fix.go:112] recreateIfNeeded on no-preload-179869: state=Stopped err=<nil>
	W1025 09:34:07.065773  197465 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:34:07.528762  193057 node_ready.go:57] node "embed-certs-173264" has "Ready":"False" status (will retry)
	I1025 09:34:09.028240  193057 node_ready.go:49] node "embed-certs-173264" is "Ready"
	I1025 09:34:09.028275  193057 node_ready.go:38] duration metric: took 40.503488022s for node "embed-certs-173264" to be "Ready" ...
	I1025 09:34:09.028288  193057 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:34:09.028367  193057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:34:09.040826  193057 api_server.go:72] duration metric: took 42.425563135s to wait for apiserver process to appear ...
	I1025 09:34:09.040849  193057 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:34:09.040868  193057 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:34:09.049181  193057 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 09:34:09.050548  193057 api_server.go:141] control plane version: v1.34.1
	I1025 09:34:09.050575  193057 api_server.go:131] duration metric: took 9.719474ms to wait for apiserver health ...
	I1025 09:34:09.050584  193057 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:34:09.054105  193057 system_pods.go:59] 8 kube-system pods found
	I1025 09:34:09.054140  193057 system_pods.go:61] "coredns-66bc5c9577-vgz5x" [0f0e1eb2-95c0-4e48-9237-fa235bd6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:34:09.054147  193057 system_pods.go:61] "etcd-embed-certs-173264" [614385a1-5378-4984-b048-8d85c96938f2] Running
	I1025 09:34:09.054154  193057 system_pods.go:61] "kindnet-862lz" [108ce2e5-6770-4794-a1de-503d2a6ea2a9] Running
	I1025 09:34:09.054159  193057 system_pods.go:61] "kube-apiserver-embed-certs-173264" [ff2383a6-cf45-400b-a449-82c480d2345e] Running
	I1025 09:34:09.054164  193057 system_pods.go:61] "kube-controller-manager-embed-certs-173264" [53385786-a0a7-40cc-9e25-ba2224c653bd] Running
	I1025 09:34:09.054169  193057 system_pods.go:61] "kube-proxy-gwv98" [173eff2d-86b5-4951-9928-37409b52fbab] Running
	I1025 09:34:09.054173  193057 system_pods.go:61] "kube-scheduler-embed-certs-173264" [cbf80061-881a-4991-b7d4-f04920872558] Running
	I1025 09:34:09.054193  193057 system_pods.go:61] "storage-provisioner" [21656d87-d41f-4d4c-87aa-5cbf74c12af2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:34:09.054200  193057 system_pods.go:74] duration metric: took 3.60967ms to wait for pod list to return data ...
	I1025 09:34:09.054208  193057 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:34:09.057444  193057 default_sa.go:45] found service account: "default"
	I1025 09:34:09.057467  193057 default_sa.go:55] duration metric: took 3.252316ms for default service account to be created ...
	I1025 09:34:09.057477  193057 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:34:09.060228  193057 system_pods.go:86] 8 kube-system pods found
	I1025 09:34:09.060264  193057 system_pods.go:89] "coredns-66bc5c9577-vgz5x" [0f0e1eb2-95c0-4e48-9237-fa235bd6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:34:09.060271  193057 system_pods.go:89] "etcd-embed-certs-173264" [614385a1-5378-4984-b048-8d85c96938f2] Running
	I1025 09:34:09.060279  193057 system_pods.go:89] "kindnet-862lz" [108ce2e5-6770-4794-a1de-503d2a6ea2a9] Running
	I1025 09:34:09.060283  193057 system_pods.go:89] "kube-apiserver-embed-certs-173264" [ff2383a6-cf45-400b-a449-82c480d2345e] Running
	I1025 09:34:09.060288  193057 system_pods.go:89] "kube-controller-manager-embed-certs-173264" [53385786-a0a7-40cc-9e25-ba2224c653bd] Running
	I1025 09:34:09.060292  193057 system_pods.go:89] "kube-proxy-gwv98" [173eff2d-86b5-4951-9928-37409b52fbab] Running
	I1025 09:34:09.060296  193057 system_pods.go:89] "kube-scheduler-embed-certs-173264" [cbf80061-881a-4991-b7d4-f04920872558] Running
	I1025 09:34:09.060302  193057 system_pods.go:89] "storage-provisioner" [21656d87-d41f-4d4c-87aa-5cbf74c12af2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:34:09.060326  193057 retry.go:31] will retry after 261.559605ms: missing components: kube-dns
	I1025 09:34:09.331580  193057 system_pods.go:86] 8 kube-system pods found
	I1025 09:34:09.331672  193057 system_pods.go:89] "coredns-66bc5c9577-vgz5x" [0f0e1eb2-95c0-4e48-9237-fa235bd6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:34:09.331694  193057 system_pods.go:89] "etcd-embed-certs-173264" [614385a1-5378-4984-b048-8d85c96938f2] Running
	I1025 09:34:09.331702  193057 system_pods.go:89] "kindnet-862lz" [108ce2e5-6770-4794-a1de-503d2a6ea2a9] Running
	I1025 09:34:09.331707  193057 system_pods.go:89] "kube-apiserver-embed-certs-173264" [ff2383a6-cf45-400b-a449-82c480d2345e] Running
	I1025 09:34:09.331729  193057 system_pods.go:89] "kube-controller-manager-embed-certs-173264" [53385786-a0a7-40cc-9e25-ba2224c653bd] Running
	I1025 09:34:09.331741  193057 system_pods.go:89] "kube-proxy-gwv98" [173eff2d-86b5-4951-9928-37409b52fbab] Running
	I1025 09:34:09.331746  193057 system_pods.go:89] "kube-scheduler-embed-certs-173264" [cbf80061-881a-4991-b7d4-f04920872558] Running
	I1025 09:34:09.331752  193057 system_pods.go:89] "storage-provisioner" [21656d87-d41f-4d4c-87aa-5cbf74c12af2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:34:09.331771  193057 retry.go:31] will retry after 338.744404ms: missing components: kube-dns
	I1025 09:34:09.674756  193057 system_pods.go:86] 8 kube-system pods found
	I1025 09:34:09.674790  193057 system_pods.go:89] "coredns-66bc5c9577-vgz5x" [0f0e1eb2-95c0-4e48-9237-fa235bd6c06d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:34:09.674797  193057 system_pods.go:89] "etcd-embed-certs-173264" [614385a1-5378-4984-b048-8d85c96938f2] Running
	I1025 09:34:09.674803  193057 system_pods.go:89] "kindnet-862lz" [108ce2e5-6770-4794-a1de-503d2a6ea2a9] Running
	I1025 09:34:09.674836  193057 system_pods.go:89] "kube-apiserver-embed-certs-173264" [ff2383a6-cf45-400b-a449-82c480d2345e] Running
	I1025 09:34:09.674842  193057 system_pods.go:89] "kube-controller-manager-embed-certs-173264" [53385786-a0a7-40cc-9e25-ba2224c653bd] Running
	I1025 09:34:09.674851  193057 system_pods.go:89] "kube-proxy-gwv98" [173eff2d-86b5-4951-9928-37409b52fbab] Running
	I1025 09:34:09.674855  193057 system_pods.go:89] "kube-scheduler-embed-certs-173264" [cbf80061-881a-4991-b7d4-f04920872558] Running
	I1025 09:34:09.674861  193057 system_pods.go:89] "storage-provisioner" [21656d87-d41f-4d4c-87aa-5cbf74c12af2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:34:09.674879  193057 retry.go:31] will retry after 442.251657ms: missing components: kube-dns
	I1025 09:34:10.121309  193057 system_pods.go:86] 8 kube-system pods found
	I1025 09:34:10.121342  193057 system_pods.go:89] "coredns-66bc5c9577-vgz5x" [0f0e1eb2-95c0-4e48-9237-fa235bd6c06d] Running
	I1025 09:34:10.121349  193057 system_pods.go:89] "etcd-embed-certs-173264" [614385a1-5378-4984-b048-8d85c96938f2] Running
	I1025 09:34:10.121353  193057 system_pods.go:89] "kindnet-862lz" [108ce2e5-6770-4794-a1de-503d2a6ea2a9] Running
	I1025 09:34:10.121358  193057 system_pods.go:89] "kube-apiserver-embed-certs-173264" [ff2383a6-cf45-400b-a449-82c480d2345e] Running
	I1025 09:34:10.121364  193057 system_pods.go:89] "kube-controller-manager-embed-certs-173264" [53385786-a0a7-40cc-9e25-ba2224c653bd] Running
	I1025 09:34:10.121369  193057 system_pods.go:89] "kube-proxy-gwv98" [173eff2d-86b5-4951-9928-37409b52fbab] Running
	I1025 09:34:10.121375  193057 system_pods.go:89] "kube-scheduler-embed-certs-173264" [cbf80061-881a-4991-b7d4-f04920872558] Running
	I1025 09:34:10.121379  193057 system_pods.go:89] "storage-provisioner" [21656d87-d41f-4d4c-87aa-5cbf74c12af2] Running
	I1025 09:34:10.121392  193057 system_pods.go:126] duration metric: took 1.063908984s to wait for k8s-apps to be running ...
	I1025 09:34:10.121400  193057 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:34:10.121458  193057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:34:10.135134  193057 system_svc.go:56] duration metric: took 13.72343ms WaitForService to wait for kubelet
	I1025 09:34:10.135162  193057 kubeadm.go:586] duration metric: took 43.519903469s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:34:10.135186  193057 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:34:10.138416  193057 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:34:10.138450  193057 node_conditions.go:123] node cpu capacity is 2
	I1025 09:34:10.138465  193057 node_conditions.go:105] duration metric: took 3.273723ms to run NodePressure ...
	I1025 09:34:10.138477  193057 start.go:241] waiting for startup goroutines ...
	I1025 09:34:10.138485  193057 start.go:246] waiting for cluster config update ...
	I1025 09:34:10.138496  193057 start.go:255] writing updated cluster config ...
	I1025 09:34:10.138829  193057 ssh_runner.go:195] Run: rm -f paused
	I1025 09:34:10.142966  193057 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:34:10.148964  193057 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vgz5x" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:10.154245  193057 pod_ready.go:94] pod "coredns-66bc5c9577-vgz5x" is "Ready"
	I1025 09:34:10.154272  193057 pod_ready.go:86] duration metric: took 5.278884ms for pod "coredns-66bc5c9577-vgz5x" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:10.156429  193057 pod_ready.go:83] waiting for pod "etcd-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:10.161477  193057 pod_ready.go:94] pod "etcd-embed-certs-173264" is "Ready"
	I1025 09:34:10.161503  193057 pod_ready.go:86] duration metric: took 5.044634ms for pod "etcd-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:10.164078  193057 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:10.169142  193057 pod_ready.go:94] pod "kube-apiserver-embed-certs-173264" is "Ready"
	I1025 09:34:10.169169  193057 pod_ready.go:86] duration metric: took 5.066041ms for pod "kube-apiserver-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:10.171488  193057 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:10.547673  193057 pod_ready.go:94] pod "kube-controller-manager-embed-certs-173264" is "Ready"
	I1025 09:34:10.547704  193057 pod_ready.go:86] duration metric: took 376.185029ms for pod "kube-controller-manager-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:10.751276  193057 pod_ready.go:83] waiting for pod "kube-proxy-gwv98" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:11.147119  193057 pod_ready.go:94] pod "kube-proxy-gwv98" is "Ready"
	I1025 09:34:11.147148  193057 pod_ready.go:86] duration metric: took 395.842556ms for pod "kube-proxy-gwv98" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:11.348349  193057 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:11.747052  193057 pod_ready.go:94] pod "kube-scheduler-embed-certs-173264" is "Ready"
	I1025 09:34:11.747078  193057 pod_ready.go:86] duration metric: took 398.706032ms for pod "kube-scheduler-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:11.747090  193057 pod_ready.go:40] duration metric: took 1.604091558s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
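	The pod_ready polling above is roughly what a per-component kubectl wait would do. A sketch against the same cluster, using the labels the log itself filters on (k8s-app for DNS/proxy, component for the static control-plane pods):
	
		kubectl --context embed-certs-173264 -n kube-system wait pod \
		  -l k8s-app=kube-dns --for=condition=Ready --timeout=240s
		kubectl --context embed-certs-173264 -n kube-system wait pod \
		  -l component=kube-apiserver --for=condition=Ready --timeout=240s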
	I1025 09:34:11.832527  193057 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:34:11.835790  193057 out.go:179] * Done! kubectl is now configured to use "embed-certs-173264" cluster and "default" namespace by default
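	Once a profile reports Done, the context it wrote can be selected explicitly, which matters in runs like this one where a second profile (no-preload-179869, below) finishes later and takes over the current context:
	
		kubectl config use-context embed-certs-173264
		kubectl get nodes -o wide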
	I1025 09:34:07.069120  197465 out.go:252] * Restarting existing docker container for "no-preload-179869" ...
	I1025 09:34:07.069208  197465 cli_runner.go:164] Run: docker start no-preload-179869
	I1025 09:34:07.325558  197465 cli_runner.go:164] Run: docker container inspect no-preload-179869 --format={{.State.Status}}
	I1025 09:34:07.348477  197465 kic.go:430] container "no-preload-179869" state is running.
	I1025 09:34:07.348871  197465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-179869
	I1025 09:34:07.372486  197465 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/config.json ...
	I1025 09:34:07.372721  197465 machine.go:93] provisionDockerMachine start ...
	I1025 09:34:07.372840  197465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179869
	I1025 09:34:07.397500  197465 main.go:141] libmachine: Using SSH client type: native
	I1025 09:34:07.397826  197465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1025 09:34:07.397841  197465 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:34:07.398486  197465 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58920->127.0.0.1:33068: read: connection reset by peer
	I1025 09:34:10.551322  197465 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-179869
	
	I1025 09:34:10.551356  197465 ubuntu.go:182] provisioning hostname "no-preload-179869"
	I1025 09:34:10.551426  197465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179869
	I1025 09:34:10.570144  197465 main.go:141] libmachine: Using SSH client type: native
	I1025 09:34:10.570453  197465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1025 09:34:10.570471  197465 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-179869 && echo "no-preload-179869" | sudo tee /etc/hostname
	I1025 09:34:10.732376  197465 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-179869
	
	I1025 09:34:10.732583  197465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179869
	I1025 09:34:10.752990  197465 main.go:141] libmachine: Using SSH client type: native
	I1025 09:34:10.753289  197465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1025 09:34:10.753306  197465 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-179869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-179869/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-179869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:34:10.914194  197465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
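	The /etc/hosts script above is written to be idempotent: it only touches the file when no entry for the hostname exists, and it rewrites an existing 127.0.1.1 line rather than appending a duplicate. The result can be checked on the node with:
	
		grep -n 'no-preload-179869' /etc/hosts
		# e.g. 2:127.0.1.1 no-preload-179869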
	I1025 09:34:10.914218  197465 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:34:10.914246  197465 ubuntu.go:190] setting up certificates
	I1025 09:34:10.914257  197465 provision.go:84] configureAuth start
	I1025 09:34:10.914317  197465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-179869
	I1025 09:34:10.931612  197465 provision.go:143] copyHostCerts
	I1025 09:34:10.931681  197465 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:34:10.931703  197465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:34:10.931782  197465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:34:10.931892  197465 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:34:10.931957  197465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:34:10.932012  197465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 09:34:10.932086  197465 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:34:10.932097  197465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:34:10.932130  197465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:34:10.932218  197465 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.no-preload-179869 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-179869]
	I1025 09:34:11.738931  197465 provision.go:177] copyRemoteCerts
	I1025 09:34:11.739033  197465 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:34:11.739153  197465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179869
	I1025 09:34:11.760578  197465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/no-preload-179869/id_rsa Username:docker}
	I1025 09:34:11.885483  197465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:34:11.915658  197465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:34:11.944625  197465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:34:11.973622  197465 provision.go:87] duration metric: took 1.059351955s to configureAuth
	I1025 09:34:11.973660  197465 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:34:11.973873  197465 config.go:182] Loaded profile config "no-preload-179869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:34:11.974090  197465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179869
	I1025 09:34:12.012465  197465 main.go:141] libmachine: Using SSH client type: native
	I1025 09:34:12.012985  197465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1025 09:34:12.013013  197465 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:34:12.380900  197465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:34:12.380928  197465 machine.go:96] duration metric: took 5.008188616s to provisionDockerMachine
	I1025 09:34:12.380939  197465 start.go:293] postStartSetup for "no-preload-179869" (driver="docker")
	I1025 09:34:12.380949  197465 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:34:12.381023  197465 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:34:12.381088  197465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179869
	I1025 09:34:12.402168  197465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/no-preload-179869/id_rsa Username:docker}
	I1025 09:34:12.519174  197465 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:34:12.523021  197465 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:34:12.523048  197465 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:34:12.523058  197465 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:34:12.523111  197465 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:34:12.523200  197465 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:34:12.523304  197465 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:34:12.532056  197465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:34:12.551957  197465 start.go:296] duration metric: took 171.003682ms for postStartSetup
	I1025 09:34:12.552036  197465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:34:12.552097  197465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179869
	I1025 09:34:12.583462  197465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/no-preload-179869/id_rsa Username:docker}
	I1025 09:34:12.687058  197465 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
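	The two df probes read disk usage on /var: the first prints field 5 of the second output line (percent used, thanks to -h), the second prints field 4 with -BG (gigabytes available). Run standalone:
	
		df -h /var | awk 'NR==2{print $5}'    # e.g. 12%
		df -BG /var | awk 'NR==2{print $4}'   # e.g. 170G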
	I1025 09:34:12.694034  197465 fix.go:56] duration metric: took 5.644499652s for fixHost
	I1025 09:34:12.694060  197465 start.go:83] releasing machines lock for "no-preload-179869", held for 5.644548178s
	I1025 09:34:12.694149  197465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-179869
	I1025 09:34:12.711009  197465 ssh_runner.go:195] Run: cat /version.json
	I1025 09:34:12.711066  197465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179869
	I1025 09:34:12.711342  197465 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:34:12.711396  197465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179869
	I1025 09:34:12.732701  197465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/no-preload-179869/id_rsa Username:docker}
	I1025 09:34:12.734425  197465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/no-preload-179869/id_rsa Username:docker}
	I1025 09:34:12.947136  197465 ssh_runner.go:195] Run: systemctl --version
	I1025 09:34:12.953583  197465 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:34:12.990550  197465 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:34:12.995392  197465 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:34:12.995473  197465 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:34:13.006528  197465 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
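	Conflicting bridge/podman CNI configs are not deleted by the find above, only renamed with a .mk_disabled suffix (here there were none). A sketch for listing and, if ever needed, restoring them:
	
		ls /etc/cni/net.d/*.mk_disabled 2>/dev/null
		for f in /etc/cni/net.d/*.mk_disabled; do
		  [ -e "$f" ] || continue             # skip if the glob matched nothing
		  sudo mv "$f" "${f%.mk_disabled}"
		done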
	I1025 09:34:13.006569  197465 start.go:495] detecting cgroup driver to use...
	I1025 09:34:13.006632  197465 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:34:13.006705  197465 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:34:13.023114  197465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:34:13.036104  197465 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:34:13.036227  197465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:34:13.051786  197465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:34:13.065512  197465 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:34:13.195906  197465 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:34:13.322379  197465 docker.go:234] disabling docker service ...
	I1025 09:34:13.322491  197465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:34:13.338781  197465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:34:13.352962  197465 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:34:13.480678  197465 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:34:13.612662  197465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:34:13.626898  197465 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:34:13.642512  197465 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:34:13.642588  197465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:13.652604  197465 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:34:13.652672  197465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:13.662679  197465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:13.672050  197465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:13.681464  197465 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:34:13.690460  197465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:13.699124  197465 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:13.707800  197465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:13.717057  197465 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:34:13.725439  197465 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:34:13.732745  197465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:34:13.855257  197465 ssh_runner.go:195] Run: sudo systemctl restart crio
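	The sed calls above edit CRI-O's drop-in in place; their net effect is equivalent to writing a drop-in like the following (a sketch of the resulting settings, not the exact file minikube ships):
	
		sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<-'EOF'
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.10.1"
		
		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
		EOF
		sudo systemctl daemon-reload && sudo systemctl restart crio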
	I1025 09:34:13.994733  197465 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:34:13.994811  197465 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:34:13.999006  197465 start.go:563] Will wait 60s for crictl version
	I1025 09:34:13.999081  197465 ssh_runner.go:195] Run: which crictl
	I1025 09:34:14.021064  197465 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:34:14.047214  197465 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
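	The version banner above comes from crictl talking to the CRI socket configured in /etc/crictl.yaml; the same client can be pointed at the socket explicitly for further inspection:
	
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a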
	I1025 09:34:14.047309  197465 ssh_runner.go:195] Run: crio --version
	I1025 09:34:14.077284  197465 ssh_runner.go:195] Run: crio --version
	I1025 09:34:14.110796  197465 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:34:14.113799  197465 cli_runner.go:164] Run: docker network inspect no-preload-179869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:34:14.130500  197465 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 09:34:14.134500  197465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:34:14.144550  197465 kubeadm.go:883] updating cluster {Name:no-preload-179869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-179869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:34:14.144681  197465 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:34:14.144722  197465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:34:14.181189  197465 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:34:14.181215  197465 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:34:14.181222  197465 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 09:34:14.181306  197465 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-179869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-179869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:34:14.181393  197465 ssh_runner.go:195] Run: crio config
	I1025 09:34:14.240327  197465 cni.go:84] Creating CNI manager for ""
	I1025 09:34:14.240351  197465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:34:14.240370  197465 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:34:14.240394  197465 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-179869 NodeName:no-preload-179869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:34:14.240518  197465 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-179869"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:34:14.240601  197465 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:34:14.249182  197465 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:34:14.249298  197465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:34:14.256892  197465 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 09:34:14.269372  197465 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:34:14.281476  197465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
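	Before it is compared and moved into place, the generated kubeadm.yaml.new can be sanity-checked with kubeadm's own validator (a validate subcommand available in recent kubeadm releases; shown as an optional check, not a step this run performs):
	
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
		  --config /var/tmp/minikube/kubeadm.yaml.new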
	I1025 09:34:14.294755  197465 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:34:14.299199  197465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:34:14.310085  197465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:34:14.467945  197465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:34:14.491389  197465 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869 for IP: 192.168.76.2
	I1025 09:34:14.491457  197465 certs.go:195] generating shared ca certs ...
	I1025 09:34:14.491487  197465 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:34:14.491661  197465 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:34:14.491783  197465 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:34:14.491813  197465 certs.go:257] generating profile certs ...
	I1025 09:34:14.491937  197465 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/client.key
	I1025 09:34:14.492055  197465 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/apiserver.key.b8f832fb
	I1025 09:34:14.492123  197465 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/proxy-client.key
	I1025 09:34:14.492269  197465 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:34:14.492321  197465 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:34:14.492345  197465 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:34:14.492411  197465 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:34:14.492457  197465 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:34:14.492514  197465 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:34:14.492593  197465 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:34:14.493225  197465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:34:14.525179  197465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:34:14.606926  197465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:34:14.644765  197465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:34:14.667425  197465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 09:34:14.692776  197465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:34:14.725035  197465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:34:14.766809  197465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:34:14.790902  197465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:34:14.815023  197465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:34:14.835017  197465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:34:14.853914  197465 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:34:14.870408  197465 ssh_runner.go:195] Run: openssl version
	I1025 09:34:14.879777  197465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:34:14.889294  197465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:34:14.893525  197465 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:34:14.893631  197465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:34:14.940875  197465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:34:14.949185  197465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:34:14.957752  197465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:34:14.961685  197465 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:34:14.961750  197465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:34:15.011519  197465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:34:15.037086  197465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:34:15.053142  197465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:34:15.058501  197465 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:34:15.058644  197465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:34:15.105650  197465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
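	The hash/symlink steps above follow OpenSSL's CA lookup convention: openssl x509 -hash prints the subject-name hash, and OpenSSL expects a trusted certificate to be reachable as <hash>.0 under /etc/ssl/certs, which is why the three symlinks (3ec20f2e.0, b5213941.0, 51391683.0) are created:
	
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		# → b5213941, so /etc/ssl/certs/b5213941.0 must point at this PEM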
	I1025 09:34:15.115563  197465 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:34:15.119695  197465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:34:15.161771  197465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:34:15.203630  197465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:34:15.246790  197465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:34:15.294110  197465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:34:15.349581  197465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
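	Each openssl run above uses -checkend 86400, which exits 0 only if the certificate will still be valid 24 hours from now; looping over the same files makes the pass/fail explicit:
	
		for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
		         /var/lib/minikube/certs/etcd/server.crt; do
		  sudo openssl x509 -noout -in "$c" -checkend 86400 \
		    && echo "$c: valid for >24h" \
		    || echo "$c: expires (or is invalid) within 24h"
		done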
	I1025 09:34:15.453665  197465 kubeadm.go:400] StartCluster: {Name:no-preload-179869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-179869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:34:15.453758  197465 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:34:15.453830  197465 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:34:15.520232  197465 cri.go:89] found id: "f5bada5085d9fc38a05b5c28df63b45e3d7d2804b79eb6e9472ffcfe51192fcf"
	I1025 09:34:15.520254  197465 cri.go:89] found id: "322d6c0123c7e7fdc2849fe7f0af01136e262450452ab009ce0b5204f1aa3c61"
	I1025 09:34:15.520259  197465 cri.go:89] found id: "e7e2e32307ca96424361bfd29933a486f201599a8ece9c4103b9b800c7dc2e1e"
	I1025 09:34:15.520263  197465 cri.go:89] found id: "a6cb8feabf010cd74e1dfdfebd8f0990900f05746d51f737f324f6b0f0b15aee"
	I1025 09:34:15.520269  197465 cri.go:89] found id: ""
	I1025 09:34:15.520319  197465 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:34:15.541582  197465 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:34:15Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:34:15.541674  197465 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:34:15.552345  197465 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:34:15.552364  197465 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:34:15.552418  197465 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:34:15.564619  197465 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:34:15.565476  197465 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-179869" does not appear in /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:34:15.566088  197465 kubeconfig.go:62] /home/jenkins/minikube-integration/21796-2312/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-179869" cluster setting kubeconfig missing "no-preload-179869" context setting]
	I1025 09:34:15.566873  197465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:34:15.570018  197465 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:34:15.579482  197465 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 09:34:15.579517  197465 kubeadm.go:601] duration metric: took 27.146022ms to restartPrimaryControlPlane
	I1025 09:34:15.579527  197465 kubeadm.go:402] duration metric: took 125.872707ms to StartCluster
	I1025 09:34:15.579541  197465 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:34:15.579602  197465 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:34:15.581124  197465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:34:15.581370  197465 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:34:15.581825  197465 config.go:182] Loaded profile config "no-preload-179869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:34:15.581792  197465 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:34:15.581871  197465 addons.go:69] Setting storage-provisioner=true in profile "no-preload-179869"
	I1025 09:34:15.581878  197465 addons.go:69] Setting dashboard=true in profile "no-preload-179869"
	I1025 09:34:15.581888  197465 addons.go:238] Setting addon storage-provisioner=true in "no-preload-179869"
	W1025 09:34:15.581896  197465 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:34:15.581897  197465 addons.go:69] Setting default-storageclass=true in profile "no-preload-179869"
	I1025 09:34:15.581908  197465 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-179869"
	I1025 09:34:15.581921  197465 host.go:66] Checking if "no-preload-179869" exists ...
	I1025 09:34:15.582326  197465 cli_runner.go:164] Run: docker container inspect no-preload-179869 --format={{.State.Status}}
	I1025 09:34:15.582505  197465 cli_runner.go:164] Run: docker container inspect no-preload-179869 --format={{.State.Status}}
	I1025 09:34:15.585263  197465 out.go:179] * Verifying Kubernetes components...
	I1025 09:34:15.581889  197465 addons.go:238] Setting addon dashboard=true in "no-preload-179869"
	W1025 09:34:15.585412  197465 addons.go:247] addon dashboard should already be in state true
	I1025 09:34:15.585467  197465 host.go:66] Checking if "no-preload-179869" exists ...
	I1025 09:34:15.586005  197465 cli_runner.go:164] Run: docker container inspect no-preload-179869 --format={{.State.Status}}
	I1025 09:34:15.590078  197465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:34:15.641598  197465 addons.go:238] Setting addon default-storageclass=true in "no-preload-179869"
	W1025 09:34:15.641621  197465 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:34:15.641660  197465 host.go:66] Checking if "no-preload-179869" exists ...
	I1025 09:34:15.644686  197465 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:34:15.644833  197465 cli_runner.go:164] Run: docker container inspect no-preload-179869 --format={{.State.Status}}
	I1025 09:34:15.647684  197465 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:34:15.647710  197465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:34:15.647788  197465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179869
	I1025 09:34:15.658829  197465 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:34:15.665422  197465 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 09:34:15.668332  197465 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:34:15.668385  197465 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:34:15.668453  197465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179869
	I1025 09:34:15.700832  197465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/no-preload-179869/id_rsa Username:docker}
	I1025 09:34:15.706742  197465 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:34:15.706770  197465 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:34:15.706836  197465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179869
	I1025 09:34:15.732061  197465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/no-preload-179869/id_rsa Username:docker}
	I1025 09:34:15.746264  197465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/no-preload-179869/id_rsa Username:docker}
	I1025 09:34:15.937586  197465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:34:15.973331  197465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:34:15.982884  197465 node_ready.go:35] waiting up to 6m0s for node "no-preload-179869" to be "Ready" ...
	I1025 09:34:16.024263  197465 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:34:16.024342  197465 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:34:16.088018  197465 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:34:16.088087  197465 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:34:16.091067  197465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:34:16.133564  197465 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:34:16.133585  197465 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:34:16.159747  197465 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:34:16.159773  197465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:34:16.255425  197465 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:34:16.255445  197465 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:34:16.280183  197465 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:34:16.280249  197465 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:34:16.331785  197465 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:34:16.331853  197465 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:34:16.356558  197465 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:34:16.356623  197465 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:34:16.382305  197465 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:34:16.382377  197465 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:34:16.403390  197465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
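	The dashboard enable path above stages each manifest onto the node over SSH and then applies them all in a single kubectl invocation. A minimal sketch of that final apply step, assuming kubectl is on PATH and the manifests already sit under /etc/kubernetes/addons (paths taken from the log above):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Manifest names as staged by the addon installer in the log above.
		manifests := []string{
			"dashboard-ns", "dashboard-clusterrole", "dashboard-clusterrolebinding",
			"dashboard-configmap", "dashboard-dp", "dashboard-role",
			"dashboard-rolebinding", "dashboard-sa", "dashboard-secret", "dashboard-svc",
		}
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", "/etc/kubernetes/addons/"+m+".yaml")
		}
		// One apply over all files, mirroring the single kubectl run in the log.
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("apply failed: %v\n%s", err, out)
		}
		log.Printf("%s", out)
	}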
	
	
	==> CRI-O <==
	Oct 25 09:34:09 embed-certs-173264 crio[839]: time="2025-10-25T09:34:09.371245809Z" level=info msg="Created container 2de2d639cb8e7482852ce89482aa7e28d9d3b1b8773bb1b625d04d8ee5e872fc: kube-system/coredns-66bc5c9577-vgz5x/coredns" id=2203ee5a-6710-4b45-9526-86bc3f56871e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:34:09 embed-certs-173264 crio[839]: time="2025-10-25T09:34:09.372133262Z" level=info msg="Starting container: 2de2d639cb8e7482852ce89482aa7e28d9d3b1b8773bb1b625d04d8ee5e872fc" id=83314124-fd52-48d7-95f6-7cff7a5e6184 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:34:09 embed-certs-173264 crio[839]: time="2025-10-25T09:34:09.374896866Z" level=info msg="Started container" PID=1726 containerID=2de2d639cb8e7482852ce89482aa7e28d9d3b1b8773bb1b625d04d8ee5e872fc description=kube-system/coredns-66bc5c9577-vgz5x/coredns id=83314124-fd52-48d7-95f6-7cff7a5e6184 name=/runtime.v1.RuntimeService/StartContainer sandboxID=50067e2de354e6e0e18f54103109f0c26e71dc01c86f5b4e5fa2d0ef34a48b7a
	Oct 25 09:34:12 embed-certs-173264 crio[839]: time="2025-10-25T09:34:12.410329455Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f17c7c73-ccc8-451f-a318-1ad0ecd3f28e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:34:12 embed-certs-173264 crio[839]: time="2025-10-25T09:34:12.410404336Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:34:12 embed-certs-173264 crio[839]: time="2025-10-25T09:34:12.423200419Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:aebd4689c5eb09256d9f6e670110fcb25d0bea9e50f86eaca823eb06fa2fe5f6 UID:bc9e47c5-4d6f-402b-b0c1-7ebe8a846159 NetNS:/var/run/netns/fb536383-3684-4668-8f47-bd687285651f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000496ad8}] Aliases:map[]}"
	Oct 25 09:34:12 embed-certs-173264 crio[839]: time="2025-10-25T09:34:12.423384471Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:34:12 embed-certs-173264 crio[839]: time="2025-10-25T09:34:12.437106186Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:aebd4689c5eb09256d9f6e670110fcb25d0bea9e50f86eaca823eb06fa2fe5f6 UID:bc9e47c5-4d6f-402b-b0c1-7ebe8a846159 NetNS:/var/run/netns/fb536383-3684-4668-8f47-bd687285651f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000496ad8}] Aliases:map[]}"
	Oct 25 09:34:12 embed-certs-173264 crio[839]: time="2025-10-25T09:34:12.437433689Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 09:34:12 embed-certs-173264 crio[839]: time="2025-10-25T09:34:12.443816727Z" level=info msg="Ran pod sandbox aebd4689c5eb09256d9f6e670110fcb25d0bea9e50f86eaca823eb06fa2fe5f6 with infra container: default/busybox/POD" id=f17c7c73-ccc8-451f-a318-1ad0ecd3f28e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:34:12 embed-certs-173264 crio[839]: time="2025-10-25T09:34:12.445403685Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d15b0e44-bb3c-4e1d-a74b-261d0573f228 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:34:12 embed-certs-173264 crio[839]: time="2025-10-25T09:34:12.445714655Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d15b0e44-bb3c-4e1d-a74b-261d0573f228 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:34:12 embed-certs-173264 crio[839]: time="2025-10-25T09:34:12.445926177Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d15b0e44-bb3c-4e1d-a74b-261d0573f228 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:34:12 embed-certs-173264 crio[839]: time="2025-10-25T09:34:12.447428153Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=82aef59d-b4de-44ec-ae40-98ac8c0faf83 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:34:12 embed-certs-173264 crio[839]: time="2025-10-25T09:34:12.451565181Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 09:34:14 embed-certs-173264 crio[839]: time="2025-10-25T09:34:14.711967883Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=82aef59d-b4de-44ec-ae40-98ac8c0faf83 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:34:14 embed-certs-173264 crio[839]: time="2025-10-25T09:34:14.712923555Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e8478ec6-6ead-4ffe-8e34-814e9826f1ef name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:34:14 embed-certs-173264 crio[839]: time="2025-10-25T09:34:14.715745137Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0816e37c-2f52-4730-ae59-58d320509774 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:34:14 embed-certs-173264 crio[839]: time="2025-10-25T09:34:14.72432689Z" level=info msg="Creating container: default/busybox/busybox" id=1e223619-9d45-4d8d-be94-16c8d6cf172b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:34:14 embed-certs-173264 crio[839]: time="2025-10-25T09:34:14.724455605Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:34:14 embed-certs-173264 crio[839]: time="2025-10-25T09:34:14.731221729Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:34:14 embed-certs-173264 crio[839]: time="2025-10-25T09:34:14.731915662Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:34:14 embed-certs-173264 crio[839]: time="2025-10-25T09:34:14.750563682Z" level=info msg="Created container 4c677ecab7e45558e5b0135d5b7ef0afbd89431ef781525d061de8a377a6fa38: default/busybox/busybox" id=1e223619-9d45-4d8d-be94-16c8d6cf172b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:34:14 embed-certs-173264 crio[839]: time="2025-10-25T09:34:14.75342428Z" level=info msg="Starting container: 4c677ecab7e45558e5b0135d5b7ef0afbd89431ef781525d061de8a377a6fa38" id=2eaecc88-e001-4d6d-8c3d-c20777a0b95c name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:34:14 embed-certs-173264 crio[839]: time="2025-10-25T09:34:14.758358307Z" level=info msg="Started container" PID=1784 containerID=4c677ecab7e45558e5b0135d5b7ef0afbd89431ef781525d061de8a377a6fa38 description=default/busybox/busybox id=2eaecc88-e001-4d6d-8c3d-c20777a0b95c name=/runtime.v1.RuntimeService/StartContainer sandboxID=aebd4689c5eb09256d9f6e670110fcb25d0bea9e50f86eaca823eb06fa2fe5f6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	4c677ecab7e45       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   aebd4689c5eb0       busybox                                      default
	2de2d639cb8e7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   50067e2de354e       coredns-66bc5c9577-vgz5x                     kube-system
	9c07e0d1e1f54       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   5d5d5e7b38330       storage-provisioner                          kube-system
	5462e342924f1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   e769f76bb96d1       kindnet-862lz                                kube-system
	0593288f640d1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   0c30c31bc4333       kube-proxy-gwv98                             kube-system
	f66ccb11e9f69       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   7214c6c071d62       kube-apiserver-embed-certs-173264            kube-system
	88750d9762c09       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   3c2b993ea118e       etcd-embed-certs-173264                      kube-system
	20b325599bc16       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   1b4564e8cc083       kube-controller-manager-embed-certs-173264   kube-system
	4f5307c53b26e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   4417e74577e3e       kube-scheduler-embed-certs-173264            kube-system
	
	
	==> coredns [2de2d639cb8e7482852ce89482aa7e28d9d3b1b8773bb1b625d04d8ee5e872fc] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58127 - 51849 "HINFO IN 8066840558295039347.970855634594065684. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.024877499s
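	The single HINFO query logged above is CoreDNS's loop-detection self-probe: the loop plugin sends a random-label HINFO question to its own listener and expects to see it exactly once. A hedged sketch of an equivalent probe using the github.com/miekg/dns library (server address assumed; NXDOMAIN is the expected healthy answer):

	package main

	import (
		"log"

		"github.com/miekg/dns"
	)

	func main() {
		m := new(dns.Msg)
		// Random-looking label, mirroring the probe in the log above.
		m.SetQuestion("8066840558295039347.970855634594065684.", dns.TypeHINFO)

		c := new(dns.Client)
		r, _, err := c.Exchange(m, "127.0.0.1:53") // CoreDNS listener assumed
		if err != nil {
			log.Fatal(err)
		}
		log.Println("rcode:", dns.RcodeToString[r.Rcode]) // NXDOMAIN expected
	}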
	
	
	==> describe nodes <==
	Name:               embed-certs-173264
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-173264
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=embed-certs-173264
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_33_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-173264
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:34:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:34:08 +0000   Sat, 25 Oct 2025 09:33:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:34:08 +0000   Sat, 25 Oct 2025 09:33:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:34:08 +0000   Sat, 25 Oct 2025 09:33:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:34:08 +0000   Sat, 25 Oct 2025 09:34:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-173264
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                fd815287-48cc-43e1-a791-5bcdc882763d
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-vgz5x                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-173264                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-862lz                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-embed-certs-173264             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-173264    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-gwv98                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-embed-certs-173264             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)  kubelet          Node embed-certs-173264 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)  kubelet          Node embed-certs-173264 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)  kubelet          Node embed-certs-173264 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node embed-certs-173264 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node embed-certs-173264 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node embed-certs-173264 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node embed-certs-173264 event: Registered Node embed-certs-173264 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-173264 status is now: NodeReady
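	As a worked check on the Allocated resources table: the percentages are requests (or limits) divided by the node's allocatable values, using integer math that truncates rather than rounds. CPU: 850m against the 2-CPU (2000m) allocatable gives 850/2000 = 42.5%, shown as 42%; memory: the 220Mi (225280Ki) request against 8022296Ki allocatable gives 225280/8022296 ≈ 2.8%, shown as 2%.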
	
	
	==> dmesg <==
	[Oct25 09:10] overlayfs: idmapped layers are currently not supported
	[Oct25 09:11] overlayfs: idmapped layers are currently not supported
	[Oct25 09:13] overlayfs: idmapped layers are currently not supported
	[ +18.632418] overlayfs: idmapped layers are currently not supported
	[ +13.271347] overlayfs: idmapped layers are currently not supported
	[Oct25 09:14] overlayfs: idmapped layers are currently not supported
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	[Oct25 09:32] overlayfs: idmapped layers are currently not supported
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [88750d9762c098ab6bef169baa7b49a398594f3e1d51758eee19214494ffdd0c] <==
	{"level":"warn","ts":"2025-10-25T09:33:17.280040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.316987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.337934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.368442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.395138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.450699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.475653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.522567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.526781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.558638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.584555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.604152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.623226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.644794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.664677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.697524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.724388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.755669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.797400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.834180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.866272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.933150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.994352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:17.997165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:18.102952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46922","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:34:22 up  1:16,  0 user,  load average: 4.66, 3.65, 2.87
	Linux embed-certs-173264 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5462e342924f11b4b021e8a5ba24f703964a28078a7b84a4bada0aa809093491] <==
	I1025 09:33:28.545298       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:33:28.547196       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 09:33:28.547393       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:33:28.547446       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:33:28.547480       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:33:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:33:28.815740       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:33:28.815817       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:33:28.815849       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:33:28.816633       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 09:33:58.816339       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:33:58.816339       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:33:58.816459       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:33:58.816609       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1025 09:34:00.316068       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:34:00.316209       1 metrics.go:72] Registering metrics
	I1025 09:34:00.316345       1 controller.go:711] "Syncing nftables rules"
	I1025 09:34:08.823057       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:34:08.823115       1 main.go:301] handling current node
	I1025 09:34:18.818046       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:34:18.818121       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f66ccb11e9f69a020af58bf5194bf7075f6ca973a1bd9c7977dffe08cfda71fd] <==
	I1025 09:33:19.455945       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:33:19.464128       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:33:19.464278       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1025 09:33:19.479983       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1025 09:33:19.496290       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:33:19.499513       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:33:19.700576       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:33:19.961480       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:33:19.967193       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:33:19.967218       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:33:20.881866       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:33:20.940680       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:33:21.059355       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:33:21.068023       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1025 09:33:21.069273       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:33:21.078440       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:33:21.257070       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:33:22.303484       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:33:22.337257       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:33:22.358909       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:33:26.779368       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:33:26.828090       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:33:27.095597       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:33:27.179485       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1025 09:34:20.270524       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:52664: use of closed network connection
	
	
	==> kube-controller-manager [20b325599bc160d35a7f0c86a14e491ad6445927e30bab2c89e2f304dc2242be] <==
	I1025 09:33:26.399669       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:33:26.402818       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:33:26.405058       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:33:26.405200       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:33:26.405879       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:33:26.405933       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:33:26.408425       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:33:26.408502       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:33:26.408575       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-173264"
	I1025 09:33:26.408642       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:33:26.409165       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:33:26.413413       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:33:26.415977       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:33:26.417697       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:33:26.439782       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-173264" podCIDRs=["10.244.0.0/24"]
	I1025 09:33:26.439866       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:33:26.464154       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:33:26.464271       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:33:26.464448       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:33:26.464788       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:33:26.493241       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:33:26.506322       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:33:26.506346       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:33:26.506354       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:34:11.415218       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0593288f640d1d8d9348afd8dd2bee26e010c21367a1b638427a94f30b7223b7] <==
	I1025 09:33:28.323680       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:33:28.456457       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:33:28.657038       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:33:28.663573       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 09:33:28.666152       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:33:28.859699       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:33:28.859827       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:33:28.867347       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:33:28.867750       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:33:28.868032       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:33:28.869761       1 config.go:200] "Starting service config controller"
	I1025 09:33:28.869836       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:33:28.869858       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:33:28.869863       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:33:28.869875       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:33:28.869879       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:33:28.876615       1 config.go:309] "Starting node config controller"
	I1025 09:33:28.877235       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:33:28.877285       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:33:28.971195       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:33:28.971327       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:33:28.971351       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4f5307c53b26e44fd68e7653474e3160ebf9b4a66c77fa12d9b8763c2f363876] <==
	E1025 09:33:19.334874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:33:19.334938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:33:19.335014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:33:19.335128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:33:19.335171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:33:19.335207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:33:19.335243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:33:19.336624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:33:19.339963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:33:19.340105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:33:19.340157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:33:19.340207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:33:19.340265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:33:19.340287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:33:20.226287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:33:20.293585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:33:20.319295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:33:20.323989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:33:20.448399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:33:20.486098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:33:20.502233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:33:20.542291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 09:33:20.562642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:33:20.580424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1025 09:33:23.320457       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:33:26 embed-certs-173264 kubelet[1310]: I1025 09:33:26.486547    1310 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 09:33:27 embed-certs-173264 kubelet[1310]: I1025 09:33:27.441377    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2t9p\" (UniqueName: \"kubernetes.io/projected/108ce2e5-6770-4794-a1de-503d2a6ea2a9-kube-api-access-x2t9p\") pod \"kindnet-862lz\" (UID: \"108ce2e5-6770-4794-a1de-503d2a6ea2a9\") " pod="kube-system/kindnet-862lz"
	Oct 25 09:33:27 embed-certs-173264 kubelet[1310]: I1025 09:33:27.441420    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/173eff2d-86b5-4951-9928-37409b52fbab-xtables-lock\") pod \"kube-proxy-gwv98\" (UID: \"173eff2d-86b5-4951-9928-37409b52fbab\") " pod="kube-system/kube-proxy-gwv98"
	Oct 25 09:33:27 embed-certs-173264 kubelet[1310]: I1025 09:33:27.441443    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/108ce2e5-6770-4794-a1de-503d2a6ea2a9-xtables-lock\") pod \"kindnet-862lz\" (UID: \"108ce2e5-6770-4794-a1de-503d2a6ea2a9\") " pod="kube-system/kindnet-862lz"
	Oct 25 09:33:27 embed-certs-173264 kubelet[1310]: I1025 09:33:27.441464    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/173eff2d-86b5-4951-9928-37409b52fbab-lib-modules\") pod \"kube-proxy-gwv98\" (UID: \"173eff2d-86b5-4951-9928-37409b52fbab\") " pod="kube-system/kube-proxy-gwv98"
	Oct 25 09:33:27 embed-certs-173264 kubelet[1310]: I1025 09:33:27.441483    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/173eff2d-86b5-4951-9928-37409b52fbab-kube-proxy\") pod \"kube-proxy-gwv98\" (UID: \"173eff2d-86b5-4951-9928-37409b52fbab\") " pod="kube-system/kube-proxy-gwv98"
	Oct 25 09:33:27 embed-certs-173264 kubelet[1310]: I1025 09:33:27.441500    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqckr\" (UniqueName: \"kubernetes.io/projected/173eff2d-86b5-4951-9928-37409b52fbab-kube-api-access-xqckr\") pod \"kube-proxy-gwv98\" (UID: \"173eff2d-86b5-4951-9928-37409b52fbab\") " pod="kube-system/kube-proxy-gwv98"
	Oct 25 09:33:27 embed-certs-173264 kubelet[1310]: I1025 09:33:27.441520    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/108ce2e5-6770-4794-a1de-503d2a6ea2a9-cni-cfg\") pod \"kindnet-862lz\" (UID: \"108ce2e5-6770-4794-a1de-503d2a6ea2a9\") " pod="kube-system/kindnet-862lz"
	Oct 25 09:33:27 embed-certs-173264 kubelet[1310]: I1025 09:33:27.441546    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/108ce2e5-6770-4794-a1de-503d2a6ea2a9-lib-modules\") pod \"kindnet-862lz\" (UID: \"108ce2e5-6770-4794-a1de-503d2a6ea2a9\") " pod="kube-system/kindnet-862lz"
	Oct 25 09:33:27 embed-certs-173264 kubelet[1310]: E1025 09:33:27.624251    1310 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 25 09:33:27 embed-certs-173264 kubelet[1310]: E1025 09:33:27.624296    1310 projected.go:196] Error preparing data for projected volume kube-api-access-x2t9p for pod kube-system/kindnet-862lz: configmap "kube-root-ca.crt" not found
	Oct 25 09:33:27 embed-certs-173264 kubelet[1310]: E1025 09:33:27.624395    1310 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/108ce2e5-6770-4794-a1de-503d2a6ea2a9-kube-api-access-x2t9p podName:108ce2e5-6770-4794-a1de-503d2a6ea2a9 nodeName:}" failed. No retries permitted until 2025-10-25 09:33:28.124349944 +0000 UTC m=+5.944215886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x2t9p" (UniqueName: "kubernetes.io/projected/108ce2e5-6770-4794-a1de-503d2a6ea2a9-kube-api-access-x2t9p") pod "kindnet-862lz" (UID: "108ce2e5-6770-4794-a1de-503d2a6ea2a9") : configmap "kube-root-ca.crt" not found
	Oct 25 09:33:27 embed-certs-173264 kubelet[1310]: I1025 09:33:27.714294    1310 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 09:33:27 embed-certs-173264 kubelet[1310]: W1025 09:33:27.980741    1310 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/crio-0c30c31bc433370970c33a0ffa351feeffc87d6231339eea85ac270a292ef673 WatchSource:0}: Error finding container 0c30c31bc433370970c33a0ffa351feeffc87d6231339eea85ac270a292ef673: Status 404 returned error can't find the container with id 0c30c31bc433370970c33a0ffa351feeffc87d6231339eea85ac270a292ef673
	Oct 25 09:33:28 embed-certs-173264 kubelet[1310]: I1025 09:33:28.896533    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-862lz" podStartSLOduration=1.896512831 podStartE2EDuration="1.896512831s" podCreationTimestamp="2025-10-25 09:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:33:28.851474989 +0000 UTC m=+6.671340931" watchObservedRunningTime="2025-10-25 09:33:28.896512831 +0000 UTC m=+6.716378773"
	Oct 25 09:33:31 embed-certs-173264 kubelet[1310]: I1025 09:33:31.636588    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gwv98" podStartSLOduration=4.636556813 podStartE2EDuration="4.636556813s" podCreationTimestamp="2025-10-25 09:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:33:28.934320418 +0000 UTC m=+6.754186360" watchObservedRunningTime="2025-10-25 09:33:31.636556813 +0000 UTC m=+9.456422755"
	Oct 25 09:34:08 embed-certs-173264 kubelet[1310]: I1025 09:34:08.934779    1310 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 09:34:09 embed-certs-173264 kubelet[1310]: I1025 09:34:09.055752    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd6t9\" (UniqueName: \"kubernetes.io/projected/0f0e1eb2-95c0-4e48-9237-fa235bd6c06d-kube-api-access-bd6t9\") pod \"coredns-66bc5c9577-vgz5x\" (UID: \"0f0e1eb2-95c0-4e48-9237-fa235bd6c06d\") " pod="kube-system/coredns-66bc5c9577-vgz5x"
	Oct 25 09:34:09 embed-certs-173264 kubelet[1310]: I1025 09:34:09.055809    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f0e1eb2-95c0-4e48-9237-fa235bd6c06d-config-volume\") pod \"coredns-66bc5c9577-vgz5x\" (UID: \"0f0e1eb2-95c0-4e48-9237-fa235bd6c06d\") " pod="kube-system/coredns-66bc5c9577-vgz5x"
	Oct 25 09:34:09 embed-certs-173264 kubelet[1310]: I1025 09:34:09.055831    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/21656d87-d41f-4d4c-87aa-5cbf74c12af2-tmp\") pod \"storage-provisioner\" (UID: \"21656d87-d41f-4d4c-87aa-5cbf74c12af2\") " pod="kube-system/storage-provisioner"
	Oct 25 09:34:09 embed-certs-173264 kubelet[1310]: I1025 09:34:09.055856    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fdgc\" (UniqueName: \"kubernetes.io/projected/21656d87-d41f-4d4c-87aa-5cbf74c12af2-kube-api-access-8fdgc\") pod \"storage-provisioner\" (UID: \"21656d87-d41f-4d4c-87aa-5cbf74c12af2\") " pod="kube-system/storage-provisioner"
	Oct 25 09:34:09 embed-certs-173264 kubelet[1310]: W1025 09:34:09.318155    1310 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/crio-50067e2de354e6e0e18f54103109f0c26e71dc01c86f5b4e5fa2d0ef34a48b7a WatchSource:0}: Error finding container 50067e2de354e6e0e18f54103109f0c26e71dc01c86f5b4e5fa2d0ef34a48b7a: Status 404 returned error can't find the container with id 50067e2de354e6e0e18f54103109f0c26e71dc01c86f5b4e5fa2d0ef34a48b7a
	Oct 25 09:34:09 embed-certs-173264 kubelet[1310]: I1025 09:34:09.942257    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.942237371 podStartE2EDuration="41.942237371s" podCreationTimestamp="2025-10-25 09:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:34:09.940258138 +0000 UTC m=+47.760124080" watchObservedRunningTime="2025-10-25 09:34:09.942237371 +0000 UTC m=+47.762103313"
	Oct 25 09:34:09 embed-certs-173264 kubelet[1310]: I1025 09:34:09.942843    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vgz5x" podStartSLOduration=42.942827729 podStartE2EDuration="42.942827729s" podCreationTimestamp="2025-10-25 09:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:34:09.925310848 +0000 UTC m=+47.745176799" watchObservedRunningTime="2025-10-25 09:34:09.942827729 +0000 UTC m=+47.762693663"
	Oct 25 09:34:12 embed-certs-173264 kubelet[1310]: I1025 09:34:12.178466    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvtt\" (UniqueName: \"kubernetes.io/projected/bc9e47c5-4d6f-402b-b0c1-7ebe8a846159-kube-api-access-9mvtt\") pod \"busybox\" (UID: \"bc9e47c5-4d6f-402b-b0c1-7ebe8a846159\") " pod="default/busybox"
	
	
	==> storage-provisioner [9c07e0d1e1f548c210b6200aaae98fc8dde0ecfad28b8df66ae9fc80c72886e9] <==
	I1025 09:34:09.360429       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:34:09.393290       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:34:09.395092       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:34:09.398786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:09.405920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:34:09.406191       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:34:09.406402       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-173264_f88e137b-fbc3-4f40-bf9e-fee2c492ab9c!
	I1025 09:34:09.410981       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f18b8aca-cedc-4b01-aff4-d043bcc5db0c", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-173264_f88e137b-fbc3-4f40-bf9e-fee2c492ab9c became leader
	W1025 09:34:09.418070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:09.439916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:34:09.511318       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-173264_f88e137b-fbc3-4f40-bf9e-fee2c492ab9c!
	W1025 09:34:11.443693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:11.449497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:13.453536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:13.462854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:15.467620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:15.472830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:17.475917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:17.491772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:19.495358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:19.503164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:21.513118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:21.523312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
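
Note on the repeated client-go warnings in the storage-provisioner log above: they fire because the provisioner's leader election still coordinates through a v1 Endpoints object (the kube-system/k8s.io-minikube-hostpath Endpoints visible in the LeaderElection event). Below is a minimal, hedged sketch of the replacement the warning points at: a coordination.k8s.io Lease lock built with client-go's resourcelock package. Only the namespace and lock name are taken from the log; everything else is illustrative and is not the provisioner's actual code.

	// lease_election.go: sketch of Lease-based leader election (assumes client-go).
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		// LeasesResourceLock coordinates through a Lease object instead of the
		// deprecated v1 Endpoints object, so the warnings.go:70 message above
		// would no longer be emitted.
		lock, err := resourcelock.New(
			resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: id},
		)
		if err != nil {
			log.Fatal(err)
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
				OnStoppedLeading: func() { log.Println("lost leadership") },
			},
		})
	}
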
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-173264 -n embed-certs-173264
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-173264 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-179869 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-179869 --alsologtostderr -v=1: exit status 80 (1.683893373s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-179869 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:35:07.320585  202598 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:07.320812  202598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:07.320835  202598 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:07.320855  202598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:07.321120  202598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:35:07.321395  202598 out.go:368] Setting JSON to false
	I1025 09:35:07.321445  202598 mustload.go:65] Loading cluster: no-preload-179869
	I1025 09:35:07.321863  202598 config.go:182] Loaded profile config "no-preload-179869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:07.322413  202598 cli_runner.go:164] Run: docker container inspect no-preload-179869 --format={{.State.Status}}
	I1025 09:35:07.347640  202598 host.go:66] Checking if "no-preload-179869" exists ...
	I1025 09:35:07.347944  202598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:35:07.404712  202598 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 09:35:07.394849451 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:35:07.405390  202598 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-179869 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:35:07.408852  202598 out.go:179] * Pausing node no-preload-179869 ... 
	I1025 09:35:07.411826  202598 host.go:66] Checking if "no-preload-179869" exists ...
	I1025 09:35:07.412159  202598 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:07.412218  202598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-179869
	I1025 09:35:07.448427  202598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/no-preload-179869/id_rsa Username:docker}
	I1025 09:35:07.553084  202598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:35:07.579550  202598 pause.go:52] kubelet running: true
	I1025 09:35:07.579614  202598 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:35:07.817955  202598 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:35:07.818080  202598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:35:07.899521  202598 cri.go:89] found id: "02492f2e14bb4c99de4a34ac3755e5a50a8595bf086464df913659f910bc608c"
	I1025 09:35:07.899551  202598 cri.go:89] found id: "df3feec3e7122e0051a6b34445e67aedc3b8e22118eabef976d15e9b8e99540c"
	I1025 09:35:07.899557  202598 cri.go:89] found id: "9c04ac65697af3509d6dea534349d6c8fb0a3f0c0d513b13f56f5066fb198d68"
	I1025 09:35:07.899561  202598 cri.go:89] found id: "6510a348366267527a1c6d9da08ba9302f7233cf6baf4feee121e3108a4111f9"
	I1025 09:35:07.899564  202598 cri.go:89] found id: "184a1a0c95fe5d8445d3fb8d8272b28b468161702f9ecd979fa5d4af4a93e122"
	I1025 09:35:07.899568  202598 cri.go:89] found id: "f5bada5085d9fc38a05b5c28df63b45e3d7d2804b79eb6e9472ffcfe51192fcf"
	I1025 09:35:07.899571  202598 cri.go:89] found id: "322d6c0123c7e7fdc2849fe7f0af01136e262450452ab009ce0b5204f1aa3c61"
	I1025 09:35:07.899575  202598 cri.go:89] found id: "e7e2e32307ca96424361bfd29933a486f201599a8ece9c4103b9b800c7dc2e1e"
	I1025 09:35:07.899579  202598 cri.go:89] found id: "a6cb8feabf010cd74e1dfdfebd8f0990900f05746d51f737f324f6b0f0b15aee"
	I1025 09:35:07.899585  202598 cri.go:89] found id: "0f5c0e42589c00e20e749547249c04c97cb314fd245734ef647815f3c008a16d"
	I1025 09:35:07.899589  202598 cri.go:89] found id: "1fbe8ea6e84e49a7a97e66e41ae18a032f04d7efd3715340156d52f83e56d5f9"
	I1025 09:35:07.899592  202598 cri.go:89] found id: ""
	I1025 09:35:07.899659  202598 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:07.910504  202598 retry.go:31] will retry after 266.980922ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:07Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:35:08.178086  202598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:35:08.191635  202598 pause.go:52] kubelet running: false
	I1025 09:35:08.191702  202598 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:35:08.366640  202598 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:35:08.366731  202598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:35:08.443016  202598 cri.go:89] found id: "02492f2e14bb4c99de4a34ac3755e5a50a8595bf086464df913659f910bc608c"
	I1025 09:35:08.443081  202598 cri.go:89] found id: "df3feec3e7122e0051a6b34445e67aedc3b8e22118eabef976d15e9b8e99540c"
	I1025 09:35:08.443103  202598 cri.go:89] found id: "9c04ac65697af3509d6dea534349d6c8fb0a3f0c0d513b13f56f5066fb198d68"
	I1025 09:35:08.443114  202598 cri.go:89] found id: "6510a348366267527a1c6d9da08ba9302f7233cf6baf4feee121e3108a4111f9"
	I1025 09:35:08.443119  202598 cri.go:89] found id: "184a1a0c95fe5d8445d3fb8d8272b28b468161702f9ecd979fa5d4af4a93e122"
	I1025 09:35:08.443136  202598 cri.go:89] found id: "f5bada5085d9fc38a05b5c28df63b45e3d7d2804b79eb6e9472ffcfe51192fcf"
	I1025 09:35:08.443140  202598 cri.go:89] found id: "322d6c0123c7e7fdc2849fe7f0af01136e262450452ab009ce0b5204f1aa3c61"
	I1025 09:35:08.443158  202598 cri.go:89] found id: "e7e2e32307ca96424361bfd29933a486f201599a8ece9c4103b9b800c7dc2e1e"
	I1025 09:35:08.443169  202598 cri.go:89] found id: "a6cb8feabf010cd74e1dfdfebd8f0990900f05746d51f737f324f6b0f0b15aee"
	I1025 09:35:08.443176  202598 cri.go:89] found id: "0f5c0e42589c00e20e749547249c04c97cb314fd245734ef647815f3c008a16d"
	I1025 09:35:08.443179  202598 cri.go:89] found id: "1fbe8ea6e84e49a7a97e66e41ae18a032f04d7efd3715340156d52f83e56d5f9"
	I1025 09:35:08.443182  202598 cri.go:89] found id: ""
	I1025 09:35:08.443243  202598 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:08.454883  202598 retry.go:31] will retry after 217.493089ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:08Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:35:08.673331  202598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:35:08.686739  202598 pause.go:52] kubelet running: false
	I1025 09:35:08.686821  202598 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:35:08.858312  202598 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:35:08.858411  202598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:35:08.929368  202598 cri.go:89] found id: "02492f2e14bb4c99de4a34ac3755e5a50a8595bf086464df913659f910bc608c"
	I1025 09:35:08.929392  202598 cri.go:89] found id: "df3feec3e7122e0051a6b34445e67aedc3b8e22118eabef976d15e9b8e99540c"
	I1025 09:35:08.929397  202598 cri.go:89] found id: "9c04ac65697af3509d6dea534349d6c8fb0a3f0c0d513b13f56f5066fb198d68"
	I1025 09:35:08.929400  202598 cri.go:89] found id: "6510a348366267527a1c6d9da08ba9302f7233cf6baf4feee121e3108a4111f9"
	I1025 09:35:08.929404  202598 cri.go:89] found id: "184a1a0c95fe5d8445d3fb8d8272b28b468161702f9ecd979fa5d4af4a93e122"
	I1025 09:35:08.929407  202598 cri.go:89] found id: "f5bada5085d9fc38a05b5c28df63b45e3d7d2804b79eb6e9472ffcfe51192fcf"
	I1025 09:35:08.929411  202598 cri.go:89] found id: "322d6c0123c7e7fdc2849fe7f0af01136e262450452ab009ce0b5204f1aa3c61"
	I1025 09:35:08.929414  202598 cri.go:89] found id: "e7e2e32307ca96424361bfd29933a486f201599a8ece9c4103b9b800c7dc2e1e"
	I1025 09:35:08.929418  202598 cri.go:89] found id: "a6cb8feabf010cd74e1dfdfebd8f0990900f05746d51f737f324f6b0f0b15aee"
	I1025 09:35:08.929424  202598 cri.go:89] found id: "0f5c0e42589c00e20e749547249c04c97cb314fd245734ef647815f3c008a16d"
	I1025 09:35:08.929428  202598 cri.go:89] found id: "1fbe8ea6e84e49a7a97e66e41ae18a032f04d7efd3715340156d52f83e56d5f9"
	I1025 09:35:08.929431  202598 cri.go:89] found id: ""
	I1025 09:35:08.929480  202598 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:08.943612  202598 out.go:203] 
	W1025 09:35:08.946480  202598 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:08.946501  202598 out.go:285] * 
	* 
	W1025 09:35:08.951381  202598 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:08.954397  202598 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-179869 --alsologtostderr -v=1 failed: exit status 80
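
Note on the exit-80 failure above: it reduces to a single root cause. Each pass of the pause flow runs `sudo runc list -f json`, and every attempt fails with "open /run/runc: no such file or directory", meaning runc's state directory is absent on the node, so pause never obtains a container list to act on. The sketch below mirrors the retry-then-fail shape visible in the trace (three list attempts with short waits before the fatal GUEST_PAUSE exit); the doubling backoff is illustrative only, since the log shows minikube's retry helper picking randomized intervals (266ms, then 217ms).

	// runc_list_retry.go: sketch of the pause flow's "list running containers" step.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delay := 250 * time.Millisecond // illustrative; minikube randomizes this
		for attempt := 1; attempt <= 3; attempt++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
			if err == nil {
				fmt.Printf("running containers: %s\n", out)
				return
			}
			// "open /run/runc: no such file or directory" means runc has no
			// state directory on this node, so there is nothing to enumerate.
			fmt.Printf("attempt %d failed: %v; retrying in %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2
		}
		fmt.Println("Exiting due to GUEST_PAUSE: could not list running containers")
	}
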
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-179869
helpers_test.go:243: (dbg) docker inspect no-preload-179869:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea",
	        "Created": "2025-10-25T09:32:25.431032619Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197592,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:34:07.103924817Z",
	            "FinishedAt": "2025-10-25T09:34:06.32180239Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/hostname",
	        "HostsPath": "/var/lib/docker/containers/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/hosts",
	        "LogPath": "/var/lib/docker/containers/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea-json.log",
	        "Name": "/no-preload-179869",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-179869:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-179869",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea",
	                "LowerDir": "/var/lib/docker/overlay2/81e00092661e10c44ffb145286208642057ce877d4a86b73f561cb203e788f89-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/81e00092661e10c44ffb145286208642057ce877d4a86b73f561cb203e788f89/merged",
	                "UpperDir": "/var/lib/docker/overlay2/81e00092661e10c44ffb145286208642057ce877d4a86b73f561cb203e788f89/diff",
	                "WorkDir": "/var/lib/docker/overlay2/81e00092661e10c44ffb145286208642057ce877d4a86b73f561cb203e788f89/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-179869",
	                "Source": "/var/lib/docker/volumes/no-preload-179869/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-179869",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-179869",
	                "name.minikube.sigs.k8s.io": "no-preload-179869",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e152feee922a9f07e18b51818e3424a4b11cf77c51f7ec53d27ea402d13daa5",
	            "SandboxKey": "/var/run/docker/netns/3e152feee922",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-179869": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:4c:72:bf:56:55",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ff99d2418ad390d8ccdf5911c4bca3c6d1626ffae4866e35866344c13c51df93",
	                    "EndpointID": "fca1557c35edda6325f1ba867b06b369dc934809174321ad250ddd24b25e5ea5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-179869",
	                        "021c28390d46"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
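
Note: the SSH port lookup in the stderr trace above (cli_runner at 09:35:07.412218) evaluates the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` against exactly this inspect document. A small sketch of the same lookup done with encoding/json follows; the struct is trimmed to just the fields this extraction touches, and the container name is the one from this run.

	// inspect_port.go: pull the forwarded SSH port out of `docker inspect` JSON.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry keeps only NetworkSettings.Ports from the inspect output.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func sshPort(name string) (string, error) {
		out, err := exec.Command("docker", "inspect", name).Output()
		if err != nil {
			return "", err
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			return "", err
		}
		if len(entries) == 0 {
			return "", fmt.Errorf("no such container: %s", name)
		}
		bindings := entries[0].NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			return "", fmt.Errorf("no host binding for 22/tcp")
		}
		return bindings[0].HostPort, nil
	}

	func main() {
		port, err := sshPort("no-preload-179869")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh host port:", port) // this run: 33068
	}
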
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-179869 -n no-preload-179869
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-179869 -n no-preload-179869: exit status 2 (363.748555ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-179869 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-179869 logs -n 25: (1.309791529s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-483456 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-483456    │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ delete  │ -p cert-options-483456                                                                                                                                                                                                                        │ cert-options-483456    │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-881642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ stop    │ -p old-k8s-version-881642 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-881642 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ start   │ -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:32 UTC │
	│ image   │ old-k8s-version-881642 image list --format=json                                                                                                                                                                                               │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ pause   │ -p old-k8s-version-881642 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ start   │ -p cert-expiration-440252 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-440252 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-881642                                                                                                                                                                                                                     │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-881642                                                                                                                                                                                                                     │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-179869      │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:33 UTC │
	│ delete  │ -p cert-expiration-440252                                                                                                                                                                                                                     │ cert-expiration-440252 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-173264     │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable metrics-server -p no-preload-179869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-179869      │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │                     │
	│ stop    │ -p no-preload-179869 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-179869      │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p no-preload-179869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-179869      │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-179869      │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-173264 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-173264     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ stop    │ -p embed-certs-173264 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-173264     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-173264 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-173264     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-173264     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ image   │ no-preload-179869 image list --format=json                                                                                                                                                                                                    │ no-preload-179869      │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ pause   │ -p no-preload-179869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-179869      │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:34:35
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:34:35.900519  200380 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:34:35.900772  200380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:34:35.900800  200380 out.go:374] Setting ErrFile to fd 2...
	I1025 09:34:35.900820  200380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:34:35.901127  200380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:34:35.901562  200380 out.go:368] Setting JSON to false
	I1025 09:34:35.902662  200380 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4627,"bootTime":1761380249,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:34:35.902768  200380 start.go:141] virtualization:  
	I1025 09:34:35.905585  200380 out.go:179] * [embed-certs-173264] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:34:35.909134  200380 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:34:35.909224  200380 notify.go:220] Checking for updates...
	I1025 09:34:35.914662  200380 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:34:35.917469  200380 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:34:35.920253  200380 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:34:35.923040  200380 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:34:35.925821  200380 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:34:35.929118  200380 config.go:182] Loaded profile config "embed-certs-173264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:34:35.929788  200380 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:34:35.954907  200380 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:34:35.955025  200380 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:34:36.028788  200380 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 09:34:36.01783765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:34:36.028907  200380 docker.go:318] overlay module found
	I1025 09:34:36.032238  200380 out.go:179] * Using the docker driver based on existing profile
	I1025 09:34:36.035022  200380 start.go:305] selected driver: docker
	I1025 09:34:36.035044  200380 start.go:925] validating driver "docker" against &{Name:embed-certs-173264 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-173264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:34:36.035162  200380 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:34:36.036000  200380 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:34:36.095404  200380 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 09:34:36.085004839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:34:36.095757  200380 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:34:36.095792  200380 cni.go:84] Creating CNI manager for ""
	I1025 09:34:36.095851  200380 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:34:36.095891  200380 start.go:349] cluster config:
	{Name:embed-certs-173264 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-173264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
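The struct above is what minikube persists to the profile's config.json (see the "Saving config" lines below). A minimal sketch of reading it back with encoding/json, assuming the top-level JSON field names match the struct dump (Name, Driver); this is an illustration, not minikube's loader:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Path taken verbatim from the profile.go "Saving config" log line.
	data, err := os.ReadFile("/home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/config.json")
	if err != nil {
		fmt.Println(err)
		return
	}
	var cfg map[string]interface{}
	if err := json.Unmarshal(data, &cfg); err != nil {
		fmt.Println("bad config.json:", err)
		return
	}
	// Field names are assumptions inferred from the struct dump above.
	fmt.Println("Name:", cfg["Name"], "Driver:", cfg["Driver"])
}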
	I1025 09:34:36.101058  200380 out.go:179] * Starting "embed-certs-173264" primary control-plane node in "embed-certs-173264" cluster
	I1025 09:34:36.103888  200380 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:34:36.106762  200380 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:34:36.109571  200380 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:34:36.109627  200380 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:34:36.109641  200380 cache.go:58] Caching tarball of preloaded images
	I1025 09:34:36.109666  200380 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:34:36.109727  200380 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:34:36.109736  200380 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:34:36.109845  200380 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/config.json ...
	I1025 09:34:36.133976  200380 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:34:36.134045  200380 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:34:36.134070  200380 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:34:36.134100  200380 start.go:360] acquireMachinesLock for embed-certs-173264: {Name:mke81dcd321ea4fd5503be9a5895c5ebc5dee6a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:34:36.134169  200380 start.go:364] duration metric: took 50.782µs to acquireMachinesLock for "embed-certs-173264"
	I1025 09:34:36.134190  200380 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:34:36.134195  200380 fix.go:54] fixHost starting: 
	I1025 09:34:36.134454  200380 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:34:36.151089  200380 fix.go:112] recreateIfNeeded on embed-certs-173264: state=Stopped err=<nil>
	W1025 09:34:36.151121  200380 fix.go:138] unexpected machine state, will restart: <nil>
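recreateIfNeeded keys off the container state string returned by the docker container inspect call above. A minimal sketch of the same check via os/exec, assuming only the docker CLI on PATH (containerState is an illustrative helper, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns Docker's state string ("running", "exited", ...)
// for the named container, mirroring the cli_runner invocation in the log.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("embed-certs-173264")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// A stopped container is restarted in place rather than recreated,
	// which is the "will restart" branch taken in the log.
	fmt.Println("state:", state)
}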
	W1025 09:34:33.878806  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	W1025 09:34:35.880265  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	I1025 09:34:36.154183  200380 out.go:252] * Restarting existing docker container for "embed-certs-173264" ...
	I1025 09:34:36.154275  200380 cli_runner.go:164] Run: docker start embed-certs-173264
	I1025 09:34:36.401090  200380 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:34:36.428711  200380 kic.go:430] container "embed-certs-173264" state is running.
	I1025 09:34:36.429237  200380 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-173264
	I1025 09:34:36.450712  200380 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/config.json ...
	I1025 09:34:36.451069  200380 machine.go:93] provisionDockerMachine start ...
	I1025 09:34:36.451343  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:36.472303  200380 main.go:141] libmachine: Using SSH client type: native
	I1025 09:34:36.472704  200380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1025 09:34:36.472717  200380 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:34:36.473461  200380 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 09:34:39.625653  200380 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-173264
	
	I1025 09:34:39.625693  200380 ubuntu.go:182] provisioning hostname "embed-certs-173264"
	I1025 09:34:39.625755  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:39.645026  200380 main.go:141] libmachine: Using SSH client type: native
	I1025 09:34:39.645345  200380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1025 09:34:39.645362  200380 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-173264 && echo "embed-certs-173264" | sudo tee /etc/hostname
	I1025 09:34:39.812340  200380 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-173264
	
	I1025 09:34:39.812431  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:39.830647  200380 main.go:141] libmachine: Using SSH client type: native
	I1025 09:34:39.830959  200380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1025 09:34:39.830984  200380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-173264' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-173264/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-173264' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:34:39.990344  200380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
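Provisioning runs each of these shell snippets over SSH to the container's published port 33073, authenticating with the machine key shown later in the log. A sketch of one such round-trip with golang.org/x/crypto/ssh (the library choice and InsecureIgnoreHostKey are assumptions made for the sketch, not details taken from the log):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and user taken from the sshutil log lines below.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM; pin the host key otherwise
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33073", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out) // expected: embed-certs-173264
}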
	I1025 09:34:39.990371  200380 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:34:39.990410  200380 ubuntu.go:190] setting up certificates
	I1025 09:34:39.990419  200380 provision.go:84] configureAuth start
	I1025 09:34:39.990478  200380 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-173264
	I1025 09:34:40.032656  200380 provision.go:143] copyHostCerts
	I1025 09:34:40.032748  200380 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:34:40.032768  200380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:34:40.032867  200380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:34:40.033033  200380 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:34:40.033039  200380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:34:40.033069  200380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:34:40.033132  200380 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:34:40.033136  200380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:34:40.033159  200380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 09:34:40.033213  200380 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.embed-certs-173264 san=[127.0.0.1 192.168.85.2 embed-certs-173264 localhost minikube]
	I1025 09:34:40.312558  200380 provision.go:177] copyRemoteCerts
	I1025 09:34:40.312634  200380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:34:40.312684  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:40.334006  200380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:34:40.441922  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 09:34:40.461686  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:34:40.481318  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:34:40.498826  200380 provision.go:87] duration metric: took 508.384602ms to configureAuth
	I1025 09:34:40.498896  200380 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:34:40.499108  200380 config.go:182] Loaded profile config "embed-certs-173264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:34:40.499244  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:40.517648  200380 main.go:141] libmachine: Using SSH client type: native
	I1025 09:34:40.518093  200380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1025 09:34:40.518126  200380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:34:40.854123  200380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:34:40.854150  200380 machine.go:96] duration metric: took 4.403067865s to provisionDockerMachine
	I1025 09:34:40.854162  200380 start.go:293] postStartSetup for "embed-certs-173264" (driver="docker")
	I1025 09:34:40.854191  200380 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:34:40.854269  200380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:34:40.854338  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:40.878782  200380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	W1025 09:34:38.380012  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	W1025 09:34:40.882362  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	I1025 09:34:40.982409  200380 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:34:40.985918  200380 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:34:40.985944  200380 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:34:40.985954  200380 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:34:40.986051  200380 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:34:40.986130  200380 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:34:40.986237  200380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:34:40.993739  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:34:41.014793  200380 start.go:296] duration metric: took 160.616504ms for postStartSetup
	I1025 09:34:41.014916  200380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:34:41.015009  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:41.033708  200380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:34:41.134932  200380 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:34:41.139615  200380 fix.go:56] duration metric: took 5.005412757s for fixHost
	I1025 09:34:41.139645  200380 start.go:83] releasing machines lock for "embed-certs-173264", held for 5.005465427s
	I1025 09:34:41.139713  200380 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-173264
	I1025 09:34:41.157609  200380 ssh_runner.go:195] Run: cat /version.json
	I1025 09:34:41.157669  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:41.157790  200380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:34:41.157851  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:41.175328  200380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:34:41.177712  200380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:34:41.370714  200380 ssh_runner.go:195] Run: systemctl --version
	I1025 09:34:41.377740  200380 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:34:41.416472  200380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:34:41.420792  200380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:34:41.420861  200380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:34:41.429152  200380 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:34:41.429176  200380 start.go:495] detecting cgroup driver to use...
	I1025 09:34:41.429208  200380 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:34:41.429258  200380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:34:41.444190  200380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:34:41.457399  200380 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:34:41.457495  200380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:34:41.473396  200380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:34:41.487187  200380 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:34:41.604246  200380 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:34:41.729466  200380 docker.go:234] disabling docker service ...
	I1025 09:34:41.729530  200380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:34:41.745584  200380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:34:41.758647  200380 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:34:41.876956  200380 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:34:41.998325  200380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:34:42.030826  200380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:34:42.047185  200380 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:34:42.047325  200380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:42.057090  200380 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:34:42.057165  200380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:42.067156  200380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:42.078330  200380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:42.090118  200380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:34:42.100367  200380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:42.111713  200380 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:42.123569  200380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
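The sed runs above patch /etc/crio/crio.conf.d/02-crio.conf in place: set the pause image, force the cgroupfs cgroup manager, and pin conmon_cgroup to "pod". The same substitutions expressed with Go's regexp package (a sketch over an in-memory copy of the file; writing the result back as root is omitted):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for the contents of /etc/crio/crio.conf.d/02-crio.conf.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Equivalent of: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Equivalent of deleting any conmon_cgroup line, then appending one after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}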
	I1025 09:34:42.134842  200380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:34:42.144675  200380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:34:42.154440  200380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:34:42.296736  200380 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:34:42.440089  200380 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:34:42.440182  200380 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:34:42.444217  200380 start.go:563] Will wait 60s for crictl version
	I1025 09:34:42.444336  200380 ssh_runner.go:195] Run: which crictl
	I1025 09:34:42.448015  200380 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:34:42.477506  200380 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:34:42.477637  200380 ssh_runner.go:195] Run: crio --version
	I1025 09:34:42.511940  200380 ssh_runner.go:195] Run: crio --version
	I1025 09:34:42.543404  200380 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:34:42.546162  200380 cli_runner.go:164] Run: docker network inspect embed-certs-173264 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:34:42.562764  200380 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 09:34:42.566481  200380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:34:42.575986  200380 kubeadm.go:883] updating cluster {Name:embed-certs-173264 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-173264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:34:42.576099  200380 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:34:42.576152  200380 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:34:42.608587  200380 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:34:42.608619  200380 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:34:42.608674  200380 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:34:42.637913  200380 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:34:42.637990  200380 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:34:42.638012  200380 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 09:34:42.638166  200380 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-173264 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-173264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:34:42.638261  200380 ssh_runner.go:195] Run: crio config
	I1025 09:34:42.701396  200380 cni.go:84] Creating CNI manager for ""
	I1025 09:34:42.701430  200380 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:34:42.701460  200380 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:34:42.701508  200380 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-173264 NodeName:embed-certs-173264 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:34:42.701675  200380 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-173264"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:34:42.701772  200380 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:34:42.709564  200380 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:34:42.709635  200380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:34:42.716931  200380 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 09:34:42.729668  200380 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:34:42.742224  200380 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
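The kubeadm.yaml generated above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is copied to /var/tmp/minikube/kubeadm.yaml.new. A sketch that sanity-checks every document parses, using gopkg.in/yaml.v3 (an assumed dependency; minikube's own validation path is not shown in this log):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		// Each "---"-separated document decodes into a generic map.
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("invalid YAML:", err)
			os.Exit(1)
		}
		fmt.Printf("document kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}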
	I1025 09:34:42.755924  200380 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:34:42.759853  200380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:34:42.769678  200380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:34:42.882548  200380 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:34:42.898300  200380 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264 for IP: 192.168.85.2
	I1025 09:34:42.898321  200380 certs.go:195] generating shared ca certs ...
	I1025 09:34:42.898337  200380 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:34:42.898552  200380 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:34:42.898621  200380 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:34:42.898632  200380 certs.go:257] generating profile certs ...
	I1025 09:34:42.898737  200380 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/client.key
	I1025 09:34:42.898823  200380 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.key.cec4835f
	I1025 09:34:42.898894  200380 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/proxy-client.key
	I1025 09:34:42.899040  200380 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:34:42.899090  200380 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:34:42.899105  200380 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:34:42.899133  200380 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:34:42.899189  200380 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:34:42.899220  200380 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:34:42.899284  200380 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:34:42.899870  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:34:42.926753  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:34:42.947085  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:34:42.978833  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:34:43.000791  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 09:34:43.022233  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:34:43.046852  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:34:43.072016  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:34:43.097218  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:34:43.125412  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:34:43.153127  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:34:43.172754  200380 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:34:43.189003  200380 ssh_runner.go:195] Run: openssl version
	I1025 09:34:43.196652  200380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:34:43.205745  200380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:34:43.209550  200380 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:34:43.209632  200380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:34:43.253804  200380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:34:43.261654  200380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:34:43.269677  200380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:34:43.273355  200380 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:34:43.273415  200380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:34:43.319092  200380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
	I1025 09:34:43.326755  200380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:34:43.335043  200380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:34:43.338953  200380 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:34:43.339017  200380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:34:43.380989  200380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:34:43.389710  200380 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:34:43.394110  200380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:34:43.435752  200380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:34:43.477474  200380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:34:43.522977  200380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:34:43.575053  200380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:34:43.644584  200380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
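Each openssl x509 -checkend 86400 run above asks whether a certificate expires within the next 24 hours. The equivalent check with Go's crypto/x509 (the path is one of those in the log; this is a sketch, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// matching `openssl x509 -checkend` semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon) // mirrors `openssl x509 -checkend 86400`
}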
	I1025 09:34:43.700882  200380 kubeadm.go:400] StartCluster: {Name:embed-certs-173264 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-173264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:34:43.700992  200380 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:34:43.701060  200380 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:34:43.770432  200380 cri.go:89] found id: "d6ad6127ca83d1792eb0f03aca451cdd0a78c05c4baaecc9fd3ec902ddd40c88"
	I1025 09:34:43.770471  200380 cri.go:89] found id: "8f686a2912e6c0a8d6e4d5311cba470140c0c77f3d59d36367a840a7e2c18a5b"
	I1025 09:34:43.770477  200380 cri.go:89] found id: "0bd9ad4a667885f7374b118b9cdffe51d851fae7ec99302cc3b3126ee7b47b5a"
	I1025 09:34:43.770481  200380 cri.go:89] found id: ""
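The IDs above come from crictl ps -a --quiet filtered on the kube-system pod-namespace label. A sketch of the same enumeration via os/exec (assumes sudo and crictl are available on the node, as in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the ssh_runner line above.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	// --quiet prints one container ID per line; Fields drops the trailing blank.
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id) // matches the cri.go "found id:" lines
	}
}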
	I1025 09:34:43.770536  200380 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:34:43.787425  200380 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:34:43Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:34:43.787513  200380 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:34:43.806769  200380 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:34:43.806787  200380 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:34:43.806850  200380 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:34:43.826245  200380 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:34:43.826841  200380 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-173264" does not appear in /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:34:43.827122  200380 kubeconfig.go:62] /home/jenkins/minikube-integration/21796-2312/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-173264" cluster setting kubeconfig missing "embed-certs-173264" context setting]
	I1025 09:34:43.827673  200380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:34:43.829062  200380 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:34:43.844782  200380 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 09:34:43.844816  200380 kubeadm.go:601] duration metric: took 38.01646ms to restartPrimaryControlPlane
	I1025 09:34:43.844825  200380 kubeadm.go:402] duration metric: took 143.952796ms to StartCluster
	I1025 09:34:43.844844  200380 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:34:43.844908  200380 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:34:43.846240  200380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:34:43.846462  200380 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:34:43.846762  200380 config.go:182] Loaded profile config "embed-certs-173264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:34:43.846819  200380 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:34:43.846889  200380 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-173264"
	I1025 09:34:43.846902  200380 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-173264"
	W1025 09:34:43.846908  200380 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:34:43.846938  200380 host.go:66] Checking if "embed-certs-173264" exists ...
	I1025 09:34:43.846967  200380 addons.go:69] Setting dashboard=true in profile "embed-certs-173264"
	I1025 09:34:43.847021  200380 addons.go:238] Setting addon dashboard=true in "embed-certs-173264"
	W1025 09:34:43.847042  200380 addons.go:247] addon dashboard should already be in state true
	I1025 09:34:43.847099  200380 host.go:66] Checking if "embed-certs-173264" exists ...
	I1025 09:34:43.847398  200380 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:34:43.847704  200380 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:34:43.850085  200380 addons.go:69] Setting default-storageclass=true in profile "embed-certs-173264"
	I1025 09:34:43.850264  200380 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-173264"
	I1025 09:34:43.851334  200380 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:34:43.858382  200380 out.go:179] * Verifying Kubernetes components...
	I1025 09:34:43.862226  200380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:34:43.905425  200380 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 09:34:43.908335  200380 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:34:43.908442  200380 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:34:43.911179  200380 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:34:43.911200  200380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:34:43.911266  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:43.911403  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:34:43.911413  200380 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:34:43.911446  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:43.949312  200380 addons.go:238] Setting addon default-storageclass=true in "embed-certs-173264"
	W1025 09:34:43.949338  200380 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:34:43.949361  200380 host.go:66] Checking if "embed-certs-173264" exists ...
	I1025 09:34:43.949817  200380 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:34:43.972151  200380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:34:43.990353  200380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:34:43.998358  200380 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:34:43.998380  200380 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:34:43.998443  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:44.027918  200380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:34:44.207116  200380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:34:44.241207  200380 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:34:44.296028  200380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:34:44.311112  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:34:44.311143  200380 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:34:44.343681  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:34:44.343713  200380 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:34:44.445575  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:34:44.445599  200380 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:34:44.513451  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:34:44.513475  200380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:34:44.529607  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:34:44.529631  200380 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:34:44.544886  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:34:44.544908  200380 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:34:44.559901  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:34:44.559924  200380 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:34:44.578857  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:34:44.578881  200380 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:34:44.598828  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:34:44.598851  200380 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:34:44.618997  200380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1025 09:34:43.380391  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	W1025 09:34:45.381520  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	I1025 09:34:50.236048  200380 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.994763458s)
	I1025 09:34:50.236095  200380 node_ready.go:35] waiting up to 6m0s for node "embed-certs-173264" to be "Ready" ...
	I1025 09:34:50.236392  200380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.940336745s)
	I1025 09:34:50.236651  200380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.617608499s)
	I1025 09:34:50.236937  200380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.029739765s)
	I1025 09:34:50.239779  200380 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-173264 addons enable metrics-server
	
	I1025 09:34:50.259812  200380 node_ready.go:49] node "embed-certs-173264" is "Ready"
	I1025 09:34:50.259911  200380 node_ready.go:38] duration metric: took 23.783192ms for node "embed-certs-173264" to be "Ready" ...
	I1025 09:34:50.259939  200380 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:34:50.260025  200380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:34:50.275415  200380 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1025 09:34:50.278375  200380 addons.go:514] duration metric: took 6.431549385s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 09:34:50.283052  200380 api_server.go:72] duration metric: took 6.436556552s to wait for apiserver process to appear ...
	I1025 09:34:50.283128  200380 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:34:50.283172  200380 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:34:50.292448  200380 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 09:34:50.293751  200380 api_server.go:141] control plane version: v1.34.1
	I1025 09:34:50.293786  200380 api_server.go:131] duration metric: took 10.629546ms to wait for apiserver health ...
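The healthz gate above is a plain HTTPS GET against the apiserver that must return 200 with body "ok". A minimal probe matching the log's check; InsecureSkipVerify is a sketch simplification, since the real client trusts the cluster's minikubeCA certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: skip verification instead of loading minikubeCA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}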
	I1025 09:34:50.293797  200380 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:34:50.297262  200380 system_pods.go:59] 8 kube-system pods found
	I1025 09:34:50.297306  200380 system_pods.go:61] "coredns-66bc5c9577-vgz5x" [0f0e1eb2-95c0-4e48-9237-fa235bd6c06d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:34:50.297316  200380 system_pods.go:61] "etcd-embed-certs-173264" [614385a1-5378-4984-b048-8d85c96938f2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:34:50.297322  200380 system_pods.go:61] "kindnet-862lz" [108ce2e5-6770-4794-a1de-503d2a6ea2a9] Running
	I1025 09:34:50.297330  200380 system_pods.go:61] "kube-apiserver-embed-certs-173264" [ff2383a6-cf45-400b-a449-82c480d2345e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:34:50.297350  200380 system_pods.go:61] "kube-controller-manager-embed-certs-173264" [53385786-a0a7-40cc-9e25-ba2224c653bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:34:50.297358  200380 system_pods.go:61] "kube-proxy-gwv98" [173eff2d-86b5-4951-9928-37409b52fbab] Running
	I1025 09:34:50.297367  200380 system_pods.go:61] "kube-scheduler-embed-certs-173264" [cbf80061-881a-4991-b7d4-f04920872558] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:34:50.297375  200380 system_pods.go:61] "storage-provisioner" [21656d87-d41f-4d4c-87aa-5cbf74c12af2] Running
	I1025 09:34:50.297381  200380 system_pods.go:74] duration metric: took 3.578473ms to wait for pod list to return data ...
	I1025 09:34:50.297389  200380 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:34:50.300105  200380 default_sa.go:45] found service account: "default"
	I1025 09:34:50.300132  200380 default_sa.go:55] duration metric: took 2.72974ms for default service account to be created ...
	I1025 09:34:50.300142  200380 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:34:50.303744  200380 system_pods.go:86] 8 kube-system pods found
	I1025 09:34:50.303780  200380 system_pods.go:89] "coredns-66bc5c9577-vgz5x" [0f0e1eb2-95c0-4e48-9237-fa235bd6c06d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:34:50.303790  200380 system_pods.go:89] "etcd-embed-certs-173264" [614385a1-5378-4984-b048-8d85c96938f2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:34:50.303796  200380 system_pods.go:89] "kindnet-862lz" [108ce2e5-6770-4794-a1de-503d2a6ea2a9] Running
	I1025 09:34:50.303803  200380 system_pods.go:89] "kube-apiserver-embed-certs-173264" [ff2383a6-cf45-400b-a449-82c480d2345e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:34:50.303810  200380 system_pods.go:89] "kube-controller-manager-embed-certs-173264" [53385786-a0a7-40cc-9e25-ba2224c653bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:34:50.303821  200380 system_pods.go:89] "kube-proxy-gwv98" [173eff2d-86b5-4951-9928-37409b52fbab] Running
	I1025 09:34:50.303828  200380 system_pods.go:89] "kube-scheduler-embed-certs-173264" [cbf80061-881a-4991-b7d4-f04920872558] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:34:50.303835  200380 system_pods.go:89] "storage-provisioner" [21656d87-d41f-4d4c-87aa-5cbf74c12af2] Running
	I1025 09:34:50.303842  200380 system_pods.go:126] duration metric: took 3.694519ms to wait for k8s-apps to be running ...
	I1025 09:34:50.303854  200380 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:34:50.303913  200380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:34:50.323203  200380 system_svc.go:56] duration metric: took 19.322059ms WaitForService to wait for kubelet
	I1025 09:34:50.323280  200380 kubeadm.go:586] duration metric: took 6.476787133s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:34:50.323315  200380 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:34:50.327067  200380 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:34:50.327147  200380 node_conditions.go:123] node cpu capacity is 2
	I1025 09:34:50.327173  200380 node_conditions.go:105] duration metric: took 3.836888ms to run NodePressure ...
	I1025 09:34:50.327201  200380 start.go:241] waiting for startup goroutines ...
	I1025 09:34:50.327233  200380 start.go:246] waiting for cluster config update ...
	I1025 09:34:50.327263  200380 start.go:255] writing updated cluster config ...
	I1025 09:34:50.327607  200380 ssh_runner.go:195] Run: rm -f paused
	I1025 09:34:50.331557  200380 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:34:50.336690  200380 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vgz5x" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:34:47.879354  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	W1025 09:34:50.378748  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	W1025 09:34:52.379416  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	I1025 09:34:53.884708  197465 pod_ready.go:94] pod "coredns-66bc5c9577-b266v" is "Ready"
	I1025 09:34:53.884733  197465 pod_ready.go:86] duration metric: took 31.511502149s for pod "coredns-66bc5c9577-b266v" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:53.888695  197465 pod_ready.go:83] waiting for pod "etcd-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:53.896110  197465 pod_ready.go:94] pod "etcd-no-preload-179869" is "Ready"
	I1025 09:34:53.896133  197465 pod_ready.go:86] duration metric: took 7.415311ms for pod "etcd-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:53.905179  197465 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:53.912774  197465 pod_ready.go:94] pod "kube-apiserver-no-preload-179869" is "Ready"
	I1025 09:34:53.912841  197465 pod_ready.go:86] duration metric: took 7.637435ms for pod "kube-apiserver-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:53.917034  197465 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:54.078579  197465 pod_ready.go:94] pod "kube-controller-manager-no-preload-179869" is "Ready"
	I1025 09:34:54.078611  197465 pod_ready.go:86] duration metric: took 161.455743ms for pod "kube-controller-manager-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:54.277053  197465 pod_ready.go:83] waiting for pod "kube-proxy-7xf9w" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:54.677754  197465 pod_ready.go:94] pod "kube-proxy-7xf9w" is "Ready"
	I1025 09:34:54.677785  197465 pod_ready.go:86] duration metric: took 400.703526ms for pod "kube-proxy-7xf9w" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:54.877322  197465 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:55.276915  197465 pod_ready.go:94] pod "kube-scheduler-no-preload-179869" is "Ready"
	I1025 09:34:55.276952  197465 pod_ready.go:86] duration metric: took 399.602972ms for pod "kube-scheduler-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:55.276965  197465 pod_ready.go:40] duration metric: took 32.908111194s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:34:55.361914  197465 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:34:55.369382  197465 out.go:179] * Done! kubectl is now configured to use "no-preload-179869" cluster and "default" namespace by default
	W1025 09:34:52.344696  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	W1025 09:34:54.843443  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	W1025 09:34:57.342552  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	W1025 09:34:59.342806  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	W1025 09:35:01.343280  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	W1025 09:35:03.349628  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	W1025 09:35:05.846315  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
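	The pod_ready warnings above are minikube's own readiness poll against the coredns pod. A rough manual equivalent (a sketch, assuming kubeconfig already points at the embed-certs-173264 cluster) would be:
	
		kubectl -n kube-system get pods -l k8s-app=kube-dns
		kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	
	The 4m timeout mirrors the "extra waiting up to 4m0s" budget logged at 09:34:50.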
	
	
	==> CRI-O <==
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.03547616Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e4a28540-c825-44f0-92ba-1619d4934980 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.03650704Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b7fb1484-b4fe-45a6-9272-a24f66ac77fb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.036618131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.047304941Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.04749202Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0533d99364909fb751cc483df338600ec8fd4e4b1c59841babd91e4a1faf2716/merged/etc/passwd: no such file or directory"
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.047523544Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0533d99364909fb751cc483df338600ec8fd4e4b1c59841babd91e4a1faf2716/merged/etc/group: no such file or directory"
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.047802186Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.074366433Z" level=info msg="Created container 02492f2e14bb4c99de4a34ac3755e5a50a8595bf086464df913659f910bc608c: kube-system/storage-provisioner/storage-provisioner" id=b7fb1484-b4fe-45a6-9272-a24f66ac77fb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.082471746Z" level=info msg="Starting container: 02492f2e14bb4c99de4a34ac3755e5a50a8595bf086464df913659f910bc608c" id=b196c27f-e89f-4fb6-82ee-4f8ff4dcebe0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.085741866Z" level=info msg="Started container" PID=1628 containerID=02492f2e14bb4c99de4a34ac3755e5a50a8595bf086464df913659f910bc608c description=kube-system/storage-provisioner/storage-provisioner id=b196c27f-e89f-4fb6-82ee-4f8ff4dcebe0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc4d4670b70d2be7128433b293eb9faef9714c87e169e92e3e7e32b50e378974
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.127107074Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.132274505Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.132493363Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.132574439Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.137226647Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.13773853Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.138734875Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.146615069Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.147383587Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.147486563Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.152750889Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.153103238Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.15335472Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.158355216Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.158512207Z" level=info msg="Updated default CNI network name to kindnet"
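	The CREATE/WRITE/RENAME events above show kindnet updating its CNI config atomically: it writes 10-kindnet.conflist.temp, then renames it over 10-kindnet.conflist. To inspect the resulting file (a sketch; the profile name is taken from the log), one could run:
	
		minikube -p no-preload-179869 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist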
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	02492f2e14bb4       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           16 seconds ago      Running             storage-provisioner         2                   dc4d4670b70d2       storage-provisioner                          kube-system
	0f5c0e42589c0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   07a179dc2dc80       dashboard-metrics-scraper-6ffb444bf9-xgdkt   kubernetes-dashboard
	1fbe8ea6e84e4       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago      Running             kubernetes-dashboard        0                   cc6658b453b18       kubernetes-dashboard-855c9754f9-mfm5d        kubernetes-dashboard
	df3feec3e7122       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           48 seconds ago      Running             coredns                     1                   30d5c98639ae4       coredns-66bc5c9577-b266v                     kube-system
	9c04ac65697af       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           48 seconds ago      Running             kindnet-cni                 1                   4e8d390cdbce9       kindnet-qjcqv                                kube-system
	6510a34836626       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           48 seconds ago      Exited              storage-provisioner         1                   dc4d4670b70d2       storage-provisioner                          kube-system
	5bbcaa7d690b6       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   ff10f920138e9       busybox                                      default
	184a1a0c95fe5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           48 seconds ago      Running             kube-proxy                  1                   09d2a9d4f4c99       kube-proxy-7xf9w                             kube-system
	f5bada5085d9f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           54 seconds ago      Running             etcd                        1                   f80c76cb3e306       etcd-no-preload-179869                       kube-system
	322d6c0123c7e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           54 seconds ago      Running             kube-apiserver              1                   c43ff245bbc4d       kube-apiserver-no-preload-179869             kube-system
	e7e2e32307ca9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           54 seconds ago      Running             kube-controller-manager     1                   e0ea882d7de77       kube-controller-manager-no-preload-179869    kube-system
	a6cb8feabf010       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           54 seconds ago      Running             kube-scheduler              1                   b602f5ff39883       kube-scheduler-no-preload-179869             kube-system
	
	
	==> coredns [df3feec3e7122e0051a6b34445e67aedc3b8e22118eabef976d15e9b8e99540c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49163 - 50799 "HINFO IN 3930737424186449920.7220986438533358864. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035542301s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
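	The dial timeouts to 10.96.0.1:443 mean coredns could not reach the kubernetes Service ClusterIP while the service proxy rules were still being restored after the restart; the kindnet log below reports the same i/o timeouts at 09:34:52. A quick check (sketch) for whether the Service VIP is reachable again:
	
		kubectl get svc kubernetes -o wide
		kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=20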
	
	
	==> describe nodes <==
	Name:               no-preload-179869
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-179869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=no-preload-179869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_33_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:33:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-179869
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:35:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:34:50 +0000   Sat, 25 Oct 2025 09:33:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:34:50 +0000   Sat, 25 Oct 2025 09:33:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:34:50 +0000   Sat, 25 Oct 2025 09:33:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:34:50 +0000   Sat, 25 Oct 2025 09:33:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-179869
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                ea4ee067-c337-4055-9d54-e11f82ef0c5b
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-b266v                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-no-preload-179869                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         113s
	  kube-system                 kindnet-qjcqv                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-179869              250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-179869     200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-7xf9w                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-179869              100m (5%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-xgdkt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mfm5d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 104s                 kube-proxy       
	  Normal   Starting                 48s                  kube-proxy       
	  Normal   Starting                 2m4s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m4s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node no-preload-179869 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node no-preload-179869 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node no-preload-179869 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    113s                 kubelet          Node no-preload-179869 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 113s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  113s                 kubelet          Node no-preload-179869 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     113s                 kubelet          Node no-preload-179869 status is now: NodeHasSufficientPID
	  Normal   Starting                 113s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           109s                 node-controller  Node no-preload-179869 event: Registered Node no-preload-179869 in Controller
	  Normal   NodeReady                91s                  kubelet          Node no-preload-179869 status is now: NodeReady
	  Normal   Starting                 56s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 56s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  56s (x8 over 56s)    kubelet          Node no-preload-179869 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node no-preload-179869 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s (x8 over 56s)    kubelet          Node no-preload-179869 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           46s                  node-controller  Node no-preload-179869 event: Registered Node no-preload-179869 in Controller
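	Given the three kubelet restarts recorded in the events above, a one-liner (sketch) to confirm the node's current Ready condition matches the table:
	
		kubectl get node no-preload-179869 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'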
	
	
	==> dmesg <==
	[Oct25 09:11] overlayfs: idmapped layers are currently not supported
	[Oct25 09:13] overlayfs: idmapped layers are currently not supported
	[ +18.632418] overlayfs: idmapped layers are currently not supported
	[ +13.271347] overlayfs: idmapped layers are currently not supported
	[Oct25 09:14] overlayfs: idmapped layers are currently not supported
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	[Oct25 09:32] overlayfs: idmapped layers are currently not supported
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	[ +28.289706] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f5bada5085d9fc38a05b5c28df63b45e3d7d2804b79eb6e9472ffcfe51192fcf] <==
	{"level":"warn","ts":"2025-10-25T09:34:18.243619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.267882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.293613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.311749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.326743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.347606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.362418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.378503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.393536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.414736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.434472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.453519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.471038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.498052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.516095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.533551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.544346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.561943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.601883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.629918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.650762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.674488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.697539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.715830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.782274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56160","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:35:10 up  1:17,  0 user,  load average: 4.03, 3.66, 2.92
	Linux no-preload-179869 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9c04ac65697af3509d6dea534349d6c8fb0a3f0c0d513b13f56f5066fb198d68] <==
	I1025 09:34:21.939388       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:34:21.939696       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 09:34:21.939827       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:34:21.939840       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:34:21.939854       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:34:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:34:22.121459       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:34:22.121545       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:34:22.121581       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:34:22.121773       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 09:34:52.126381       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:34:52.126384       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:34:52.126503       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:34:52.126517       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1025 09:34:53.522538       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:34:53.522574       1 metrics.go:72] Registering metrics
	I1025 09:34:53.522695       1 controller.go:711] "Syncing nftables rules"
	I1025 09:35:02.126256       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:35:02.126609       1 main.go:301] handling current node
	
	
	==> kube-apiserver [322d6c0123c7e7fdc2849fe7f0af01136e262450452ab009ce0b5204f1aa3c61] <==
	I1025 09:34:19.887472       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:34:19.910082       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 09:34:19.920434       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:34:19.920591       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:34:19.926140       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:34:19.947629       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1025 09:34:19.951919       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:34:19.970310       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:34:19.970330       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:34:19.976895       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:34:19.977132       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:34:19.977182       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:34:19.981128       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:34:19.997411       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:34:20.477244       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:34:20.726406       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:34:20.863691       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:34:21.221250       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:34:21.351576       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:34:21.416663       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:34:21.746456       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.153.58"}
	I1025 09:34:21.791672       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.195.199"}
	I1025 09:34:24.229687       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:34:24.532410       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:34:24.640833       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e7e2e32307ca96424361bfd29933a486f201599a8ece9c4103b9b800c7dc2e1e] <==
	I1025 09:34:24.101520       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 09:34:24.101535       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:34:24.102660       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:34:24.105565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:34:24.121114       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:34:24.121274       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:34:24.121596       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:34:24.122067       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:34:24.122189       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:34:24.122239       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:34:24.122726       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:34:24.123308       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:34:24.124492       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:34:24.125409       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:34:24.143392       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:34:24.146685       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:34:24.153316       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:34:24.156460       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:34:24.156588       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:34:24.156669       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-179869"
	I1025 09:34:24.156721       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:34:24.159403       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:34:24.162966       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:34:24.165671       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:34:24.172458       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	
	
	==> kube-proxy [184a1a0c95fe5d8445d3fb8d8272b28b468161702f9ecd979fa5d4af4a93e122] <==
	I1025 09:34:21.580959       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:34:22.064717       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:34:22.165133       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:34:22.165177       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 09:34:22.165275       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:34:22.198249       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:34:22.198304       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:34:22.202681       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:34:22.203112       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:34:22.203357       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:34:22.204697       1 config.go:200] "Starting service config controller"
	I1025 09:34:22.204749       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:34:22.204770       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:34:22.204775       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:34:22.204786       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:34:22.204790       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:34:22.205623       1 config.go:309] "Starting node config controller"
	I1025 09:34:22.205672       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:34:22.205703       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:34:22.305103       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:34:22.305140       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:34:22.305210       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a6cb8feabf010cd74e1dfdfebd8f0990900f05746d51f737f324f6b0f0b15aee] <==
	I1025 09:34:18.360662       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:34:19.838364       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:34:19.838397       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:34:19.838415       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:34:19.838423       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:34:19.952887       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:34:19.952916       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:34:19.956867       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:34:19.956915       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:34:19.958607       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:34:19.958869       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:34:20.058263       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:34:24 no-preload-179869 kubelet[771]: I1025 09:34:24.733148     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhw4n\" (UniqueName: \"kubernetes.io/projected/2a0c611c-213f-4b5c-a9fc-5df39eb29500-kube-api-access-lhw4n\") pod \"dashboard-metrics-scraper-6ffb444bf9-xgdkt\" (UID: \"2a0c611c-213f-4b5c-a9fc-5df39eb29500\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xgdkt"
	Oct 25 09:34:24 no-preload-179869 kubelet[771]: I1025 09:34:24.733171     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jshj6\" (UniqueName: \"kubernetes.io/projected/a50d6b41-01f1-46ca-bcfd-0d1aefe83b4a-kube-api-access-jshj6\") pod \"kubernetes-dashboard-855c9754f9-mfm5d\" (UID: \"a50d6b41-01f1-46ca-bcfd-0d1aefe83b4a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mfm5d"
	Oct 25 09:34:24 no-preload-179869 kubelet[771]: I1025 09:34:24.733251     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a50d6b41-01f1-46ca-bcfd-0d1aefe83b4a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-mfm5d\" (UID: \"a50d6b41-01f1-46ca-bcfd-0d1aefe83b4a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mfm5d"
	Oct 25 09:34:24 no-preload-179869 kubelet[771]: W1025 09:34:24.994651     771 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/crio-cc6658b453b1849e1ddb529b759cf9c01ce3a6a36515b0448171d6d255b3fcc8 WatchSource:0}: Error finding container cc6658b453b1849e1ddb529b759cf9c01ce3a6a36515b0448171d6d255b3fcc8: Status 404 returned error can't find the container with id cc6658b453b1849e1ddb529b759cf9c01ce3a6a36515b0448171d6d255b3fcc8
	Oct 25 09:34:28 no-preload-179869 kubelet[771]: I1025 09:34:28.929926     771 scope.go:117] "RemoveContainer" containerID="f6d0053f55f87e3e8e6778a9c496fe645d6a6f7fb0b158689b75bd4a3f3383eb"
	Oct 25 09:34:29 no-preload-179869 kubelet[771]: I1025 09:34:29.934479     771 scope.go:117] "RemoveContainer" containerID="f6d0053f55f87e3e8e6778a9c496fe645d6a6f7fb0b158689b75bd4a3f3383eb"
	Oct 25 09:34:29 no-preload-179869 kubelet[771]: I1025 09:34:29.934745     771 scope.go:117] "RemoveContainer" containerID="9d87a0d35fd4c9ce5cdf73c1107b1d02e08a1e5de0efa8d33b9ae497e90b594f"
	Oct 25 09:34:29 no-preload-179869 kubelet[771]: E1025 09:34:29.934885     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xgdkt_kubernetes-dashboard(2a0c611c-213f-4b5c-a9fc-5df39eb29500)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xgdkt" podUID="2a0c611c-213f-4b5c-a9fc-5df39eb29500"
	Oct 25 09:34:30 no-preload-179869 kubelet[771]: I1025 09:34:30.938978     771 scope.go:117] "RemoveContainer" containerID="9d87a0d35fd4c9ce5cdf73c1107b1d02e08a1e5de0efa8d33b9ae497e90b594f"
	Oct 25 09:34:30 no-preload-179869 kubelet[771]: E1025 09:34:30.939152     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xgdkt_kubernetes-dashboard(2a0c611c-213f-4b5c-a9fc-5df39eb29500)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xgdkt" podUID="2a0c611c-213f-4b5c-a9fc-5df39eb29500"
	Oct 25 09:34:34 no-preload-179869 kubelet[771]: I1025 09:34:34.916692     771 scope.go:117] "RemoveContainer" containerID="9d87a0d35fd4c9ce5cdf73c1107b1d02e08a1e5de0efa8d33b9ae497e90b594f"
	Oct 25 09:34:34 no-preload-179869 kubelet[771]: E1025 09:34:34.916933     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xgdkt_kubernetes-dashboard(2a0c611c-213f-4b5c-a9fc-5df39eb29500)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xgdkt" podUID="2a0c611c-213f-4b5c-a9fc-5df39eb29500"
	Oct 25 09:34:48 no-preload-179869 kubelet[771]: I1025 09:34:48.791884     771 scope.go:117] "RemoveContainer" containerID="9d87a0d35fd4c9ce5cdf73c1107b1d02e08a1e5de0efa8d33b9ae497e90b594f"
	Oct 25 09:34:49 no-preload-179869 kubelet[771]: I1025 09:34:49.001892     771 scope.go:117] "RemoveContainer" containerID="9d87a0d35fd4c9ce5cdf73c1107b1d02e08a1e5de0efa8d33b9ae497e90b594f"
	Oct 25 09:34:49 no-preload-179869 kubelet[771]: I1025 09:34:49.002279     771 scope.go:117] "RemoveContainer" containerID="0f5c0e42589c00e20e749547249c04c97cb314fd245734ef647815f3c008a16d"
	Oct 25 09:34:49 no-preload-179869 kubelet[771]: E1025 09:34:49.002460     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xgdkt_kubernetes-dashboard(2a0c611c-213f-4b5c-a9fc-5df39eb29500)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xgdkt" podUID="2a0c611c-213f-4b5c-a9fc-5df39eb29500"
	Oct 25 09:34:49 no-preload-179869 kubelet[771]: I1025 09:34:49.037512     771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mfm5d" podStartSLOduration=16.810966589 podStartE2EDuration="25.037495016s" podCreationTimestamp="2025-10-25 09:34:24 +0000 UTC" firstStartedPulling="2025-10-25 09:34:25.00239123 +0000 UTC m=+10.500899604" lastFinishedPulling="2025-10-25 09:34:33.228919665 +0000 UTC m=+18.727428031" observedRunningTime="2025-10-25 09:34:33.965234896 +0000 UTC m=+19.463743270" watchObservedRunningTime="2025-10-25 09:34:49.037495016 +0000 UTC m=+34.536003382"
	Oct 25 09:34:53 no-preload-179869 kubelet[771]: I1025 09:34:53.033265     771 scope.go:117] "RemoveContainer" containerID="6510a348366267527a1c6d9da08ba9302f7233cf6baf4feee121e3108a4111f9"
	Oct 25 09:34:54 no-preload-179869 kubelet[771]: I1025 09:34:54.917036     771 scope.go:117] "RemoveContainer" containerID="0f5c0e42589c00e20e749547249c04c97cb314fd245734ef647815f3c008a16d"
	Oct 25 09:34:54 no-preload-179869 kubelet[771]: E1025 09:34:54.917701     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xgdkt_kubernetes-dashboard(2a0c611c-213f-4b5c-a9fc-5df39eb29500)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xgdkt" podUID="2a0c611c-213f-4b5c-a9fc-5df39eb29500"
	Oct 25 09:35:06 no-preload-179869 kubelet[771]: I1025 09:35:06.789906     771 scope.go:117] "RemoveContainer" containerID="0f5c0e42589c00e20e749547249c04c97cb314fd245734ef647815f3c008a16d"
	Oct 25 09:35:06 no-preload-179869 kubelet[771]: E1025 09:35:06.790558     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xgdkt_kubernetes-dashboard(2a0c611c-213f-4b5c-a9fc-5df39eb29500)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xgdkt" podUID="2a0c611c-213f-4b5c-a9fc-5df39eb29500"
	Oct 25 09:35:07 no-preload-179869 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:35:07 no-preload-179869 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:35:07 no-preload-179869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [1fbe8ea6e84e49a7a97e66e41ae18a032f04d7efd3715340156d52f83e56d5f9] <==
	2025/10/25 09:34:33 Using namespace: kubernetes-dashboard
	2025/10/25 09:34:33 Using in-cluster config to connect to apiserver
	2025/10/25 09:34:33 Using secret token for csrf signing
	2025/10/25 09:34:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:34:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:34:33 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:34:33 Generating JWE encryption key
	2025/10/25 09:34:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:34:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:34:33 Initializing JWE encryption key from synchronized object
	2025/10/25 09:34:33 Creating in-cluster Sidecar client
	2025/10/25 09:34:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:34:33 Serving insecurely on HTTP port: 9090
	2025/10/25 09:35:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:34:33 Starting overwatch
	
	
	==> storage-provisioner [02492f2e14bb4c99de4a34ac3755e5a50a8595bf086464df913659f910bc608c] <==
	I1025 09:34:53.098738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:34:53.114871       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:34:53.115664       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:34:53.127536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:56.583040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:00.843756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:04.443154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:07.496219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:10.518542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:10.529743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:35:10.530528       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6556c1b4-f5ed-4c56-8bcf-e108cc1d8bad", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-179869_f5bf9a2f-96af-4a86-97ee-47a4cf570d3c became leader
	I1025 09:35:10.530575       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:35:10.530664       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-179869_f5bf9a2f-96af-4a86-97ee-47a4cf570d3c!
	W1025 09:35:10.540133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:10.546304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6510a348366267527a1c6d9da08ba9302f7233cf6baf4feee121e3108a4111f9] <==
	I1025 09:34:22.026082       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:34:52.028250       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
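
Two failure signatures stand out in the logs above: the kubelet entries show CrashLoopBackOff doubling its restart delay for dashboard-metrics-scraper (back-off 10s, then 20s), and the first storage-provisioner instance exits fatally on an apiserver i/o timeout (dial tcp 10.96.0.1:443) before its replacement acquires the k8s.io-minikube-hostpath lease. A minimal triage sketch, assuming the cluster were still reachable (illustrative commands, not part of the harness):

	kubectl --context no-preload-179869 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-xgdkt --previous
	kubectl --context no-preload-179869 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-xgdkt
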
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-179869 -n no-preload-179869
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-179869 -n no-preload-179869: exit status 2 (370.785675ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-179869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
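
The "(may be ok)" annotation above is deliberate: minikube status encodes component health in its exit code, so a non-zero exit alongside "Running" on stdout only means the one templated field was healthy while some other component was not (expected here, since the test had just attempted to pause the profile). An illustrative way to see every component at once, outside the harness:

	out/minikube-linux-arm64 status -p no-preload-179869 --output json
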
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-179869
helpers_test.go:243: (dbg) docker inspect no-preload-179869:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea",
	        "Created": "2025-10-25T09:32:25.431032619Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 197592,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:34:07.103924817Z",
	            "FinishedAt": "2025-10-25T09:34:06.32180239Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/hostname",
	        "HostsPath": "/var/lib/docker/containers/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/hosts",
	        "LogPath": "/var/lib/docker/containers/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea-json.log",
	        "Name": "/no-preload-179869",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-179869:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-179869",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea",
	                "LowerDir": "/var/lib/docker/overlay2/81e00092661e10c44ffb145286208642057ce877d4a86b73f561cb203e788f89-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/81e00092661e10c44ffb145286208642057ce877d4a86b73f561cb203e788f89/merged",
	                "UpperDir": "/var/lib/docker/overlay2/81e00092661e10c44ffb145286208642057ce877d4a86b73f561cb203e788f89/diff",
	                "WorkDir": "/var/lib/docker/overlay2/81e00092661e10c44ffb145286208642057ce877d4a86b73f561cb203e788f89/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-179869",
	                "Source": "/var/lib/docker/volumes/no-preload-179869/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-179869",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-179869",
	                "name.minikube.sigs.k8s.io": "no-preload-179869",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e152feee922a9f07e18b51818e3424a4b11cf77c51f7ec53d27ea402d13daa5",
	            "SandboxKey": "/var/run/docker/netns/3e152feee922",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-179869": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:4c:72:bf:56:55",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ff99d2418ad390d8ccdf5911c4bca3c6d1626ffae4866e35866344c13c51df93",
	                    "EndpointID": "fca1557c35edda6325f1ba867b06b369dc934809174321ad250ddd24b25e5ea5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-179869",
	                        "021c28390d46"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
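
The port bindings in the inspect output are the dynamically allocated host ports minikube publishes for the container; the harness reads them back with a Go template (the same cli_runner invocation appears in the minikube logs below). For example, the mapped SSH port could be extracted with:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-179869

which, for the JSON above, prints 33068.
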
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-179869 -n no-preload-179869
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-179869 -n no-preload-179869: exit status 2 (362.470947ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-179869 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-179869 logs -n 25: (1.30490939s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-483456 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-483456    │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ delete  │ -p cert-options-483456                                                                                                                                                                                                                        │ cert-options-483456    │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:29 UTC │
	│ start   │ -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:29 UTC │ 25 Oct 25 09:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-881642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ stop    │ -p old-k8s-version-881642 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-881642 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ start   │ -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:32 UTC │
	│ image   │ old-k8s-version-881642 image list --format=json                                                                                                                                                                                               │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ pause   │ -p old-k8s-version-881642 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ start   │ -p cert-expiration-440252 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-440252 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-881642                                                                                                                                                                                                                     │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-881642                                                                                                                                                                                                                     │ old-k8s-version-881642 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-179869      │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:33 UTC │
	│ delete  │ -p cert-expiration-440252                                                                                                                                                                                                                     │ cert-expiration-440252 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-173264     │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable metrics-server -p no-preload-179869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-179869      │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │                     │
	│ stop    │ -p no-preload-179869 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-179869      │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p no-preload-179869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-179869      │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-179869      │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-173264 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-173264     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ stop    │ -p embed-certs-173264 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-173264     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-173264 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-173264     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-173264     │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ image   │ no-preload-179869 image list --format=json                                                                                                                                                                                                    │ no-preload-179869      │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ pause   │ -p no-preload-179869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-179869      │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:34:35
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:34:35.900519  200380 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:34:35.900772  200380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:34:35.900800  200380 out.go:374] Setting ErrFile to fd 2...
	I1025 09:34:35.900820  200380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:34:35.901127  200380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:34:35.901562  200380 out.go:368] Setting JSON to false
	I1025 09:34:35.902662  200380 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4627,"bootTime":1761380249,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:34:35.902768  200380 start.go:141] virtualization:  
	I1025 09:34:35.905585  200380 out.go:179] * [embed-certs-173264] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:34:35.909134  200380 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:34:35.909224  200380 notify.go:220] Checking for updates...
	I1025 09:34:35.914662  200380 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:34:35.917469  200380 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:34:35.920253  200380 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:34:35.923040  200380 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:34:35.925821  200380 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:34:35.929118  200380 config.go:182] Loaded profile config "embed-certs-173264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:34:35.929788  200380 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:34:35.954907  200380 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:34:35.955025  200380 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:34:36.028788  200380 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 09:34:36.01783765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:34:36.028907  200380 docker.go:318] overlay module found
	I1025 09:34:36.032238  200380 out.go:179] * Using the docker driver based on existing profile
	I1025 09:34:36.035022  200380 start.go:305] selected driver: docker
	I1025 09:34:36.035044  200380 start.go:925] validating driver "docker" against &{Name:embed-certs-173264 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-173264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:34:36.035162  200380 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:34:36.036000  200380 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:34:36.095404  200380 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 09:34:36.085004839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:34:36.095757  200380 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:34:36.095792  200380 cni.go:84] Creating CNI manager for ""
	I1025 09:34:36.095851  200380 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:34:36.095891  200380 start.go:349] cluster config:
	{Name:embed-certs-173264 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-173264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:34:36.101058  200380 out.go:179] * Starting "embed-certs-173264" primary control-plane node in "embed-certs-173264" cluster
	I1025 09:34:36.103888  200380 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:34:36.106762  200380 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:34:36.109571  200380 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:34:36.109627  200380 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:34:36.109641  200380 cache.go:58] Caching tarball of preloaded images
	I1025 09:34:36.109666  200380 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:34:36.109727  200380 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:34:36.109736  200380 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:34:36.109845  200380 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/config.json ...
	I1025 09:34:36.133976  200380 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:34:36.134045  200380 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:34:36.134070  200380 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:34:36.134100  200380 start.go:360] acquireMachinesLock for embed-certs-173264: {Name:mke81dcd321ea4fd5503be9a5895c5ebc5dee6a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:34:36.134169  200380 start.go:364] duration metric: took 50.782µs to acquireMachinesLock for "embed-certs-173264"
	I1025 09:34:36.134190  200380 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:34:36.134195  200380 fix.go:54] fixHost starting: 
	I1025 09:34:36.134454  200380 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:34:36.151089  200380 fix.go:112] recreateIfNeeded on embed-certs-173264: state=Stopped err=<nil>
	W1025 09:34:36.151121  200380 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:34:33.878806  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	W1025 09:34:35.880265  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	I1025 09:34:36.154183  200380 out.go:252] * Restarting existing docker container for "embed-certs-173264" ...
	I1025 09:34:36.154275  200380 cli_runner.go:164] Run: docker start embed-certs-173264
	I1025 09:34:36.401090  200380 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:34:36.428711  200380 kic.go:430] container "embed-certs-173264" state is running.
	I1025 09:34:36.429237  200380 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-173264
	I1025 09:34:36.450712  200380 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/config.json ...
	I1025 09:34:36.451069  200380 machine.go:93] provisionDockerMachine start ...
	I1025 09:34:36.451343  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:36.472303  200380 main.go:141] libmachine: Using SSH client type: native
	I1025 09:34:36.472704  200380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1025 09:34:36.472717  200380 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:34:36.473461  200380 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 09:34:39.625653  200380 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-173264
	
	I1025 09:34:39.625693  200380 ubuntu.go:182] provisioning hostname "embed-certs-173264"
	I1025 09:34:39.625755  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:39.645026  200380 main.go:141] libmachine: Using SSH client type: native
	I1025 09:34:39.645345  200380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1025 09:34:39.645362  200380 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-173264 && echo "embed-certs-173264" | sudo tee /etc/hostname
	I1025 09:34:39.812340  200380 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-173264
	
	I1025 09:34:39.812431  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:39.830647  200380 main.go:141] libmachine: Using SSH client type: native
	I1025 09:34:39.830959  200380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1025 09:34:39.830984  200380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-173264' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-173264/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-173264' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:34:39.990344  200380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:34:39.990371  200380 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:34:39.990410  200380 ubuntu.go:190] setting up certificates
	I1025 09:34:39.990419  200380 provision.go:84] configureAuth start
	I1025 09:34:39.990478  200380 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-173264
	I1025 09:34:40.032656  200380 provision.go:143] copyHostCerts
	I1025 09:34:40.032748  200380 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:34:40.032768  200380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:34:40.032867  200380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:34:40.033033  200380 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:34:40.033039  200380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:34:40.033069  200380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:34:40.033132  200380 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:34:40.033136  200380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:34:40.033159  200380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 09:34:40.033213  200380 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.embed-certs-173264 san=[127.0.0.1 192.168.85.2 embed-certs-173264 localhost minikube]
	I1025 09:34:40.312558  200380 provision.go:177] copyRemoteCerts
	I1025 09:34:40.312634  200380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:34:40.312684  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:40.334006  200380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:34:40.441922  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 09:34:40.461686  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:34:40.481318  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:34:40.498826  200380 provision.go:87] duration metric: took 508.384602ms to configureAuth
	I1025 09:34:40.498896  200380 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:34:40.499108  200380 config.go:182] Loaded profile config "embed-certs-173264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:34:40.499244  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:40.517648  200380 main.go:141] libmachine: Using SSH client type: native
	I1025 09:34:40.518093  200380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1025 09:34:40.518126  200380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:34:40.854123  200380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:34:40.854150  200380 machine.go:96] duration metric: took 4.403067865s to provisionDockerMachine
	I1025 09:34:40.854162  200380 start.go:293] postStartSetup for "embed-certs-173264" (driver="docker")
	I1025 09:34:40.854191  200380 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:34:40.854269  200380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:34:40.854338  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:40.878782  200380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	W1025 09:34:38.380012  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	W1025 09:34:40.882362  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	I1025 09:34:40.982409  200380 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:34:40.985918  200380 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:34:40.985944  200380 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:34:40.985954  200380 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:34:40.986051  200380 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:34:40.986130  200380 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:34:40.986237  200380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:34:40.993739  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:34:41.014793  200380 start.go:296] duration metric: took 160.616504ms for postStartSetup
	I1025 09:34:41.014916  200380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:34:41.015009  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:41.033708  200380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:34:41.134932  200380 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:34:41.139615  200380 fix.go:56] duration metric: took 5.005412757s for fixHost
	I1025 09:34:41.139645  200380 start.go:83] releasing machines lock for "embed-certs-173264", held for 5.005465427s
	I1025 09:34:41.139713  200380 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-173264
	I1025 09:34:41.157609  200380 ssh_runner.go:195] Run: cat /version.json
	I1025 09:34:41.157669  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:41.157790  200380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:34:41.157851  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:41.175328  200380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:34:41.177712  200380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:34:41.370714  200380 ssh_runner.go:195] Run: systemctl --version
	I1025 09:34:41.377740  200380 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:34:41.416472  200380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:34:41.420792  200380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:34:41.420861  200380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:34:41.429152  200380 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:34:41.429176  200380 start.go:495] detecting cgroup driver to use...
	I1025 09:34:41.429208  200380 detect.go:187] detected "cgroupfs" cgroup driver on host os
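
	detect.go reported "cgroupfs" above. One common heuristic for that call (an assumption, not necessarily minikube's exact logic) is to test for the cgroup v2 unified hierarchy and fall back to "cgroupfs" on v1 hosts:

	// cgroup_sketch.go: crude cgroup-version probe.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// /sys/fs/cgroup/cgroup.controllers exists only on a cgroup v2 unified mount.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 host; \"systemd\" driver is viable")
		} else {
			fmt.Println("cgroup v1 host; using \"cgroupfs\" driver")
		}
	}
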
	I1025 09:34:41.429258  200380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:34:41.444190  200380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:34:41.457399  200380 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:34:41.457495  200380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:34:41.473396  200380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:34:41.487187  200380 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:34:41.604246  200380 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:34:41.729466  200380 docker.go:234] disabling docker service ...
	I1025 09:34:41.729530  200380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:34:41.745584  200380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:34:41.758647  200380 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:34:41.876956  200380 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:34:41.998325  200380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:34:42.030826  200380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:34:42.047185  200380 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:34:42.047325  200380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:42.057090  200380 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:34:42.057165  200380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:42.067156  200380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:42.078330  200380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:42.090118  200380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:34:42.100367  200380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:42.111713  200380 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:42.123569  200380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:34:42.134842  200380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:34:42.144675  200380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:34:42.154440  200380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:34:42.296736  200380 ssh_runner.go:195] Run: sudo systemctl restart crio
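
	The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed, then reloads systemd and restarts CRI-O. The same sequence, condensed into a sketch that runs the commands locally via os/exec instead of over SSH (a hypothetical helper, not minikube's ssh_runner):

	// crio_config_sketch.go: replay the sed edits logged above.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(cmd string) {
		if out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput(); err != nil {
			log.Fatalf("%s: %v\n%s", cmd, err, out)
		}
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		run(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' ` + conf)
		run(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf)
		run(`sed -i '/conmon_cgroup = .*/d' ` + conf)
		run(`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf)
		run("systemctl daemon-reload && systemctl restart crio")
	}
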
	I1025 09:34:42.440089  200380 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:34:42.440182  200380 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:34:42.444217  200380 start.go:563] Will wait 60s for crictl version
	I1025 09:34:42.444336  200380 ssh_runner.go:195] Run: which crictl
	I1025 09:34:42.448015  200380 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:34:42.477506  200380 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:34:42.477637  200380 ssh_runner.go:195] Run: crio --version
	I1025 09:34:42.511940  200380 ssh_runner.go:195] Run: crio --version
	I1025 09:34:42.543404  200380 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:34:42.546162  200380 cli_runner.go:164] Run: docker network inspect embed-certs-173264 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:34:42.562764  200380 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 09:34:42.566481  200380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
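
	The /etc/hosts edit above is idempotent: strip any stale host.minikube.internal line, append the current mapping, copy the temp file back. An equivalent Go sketch (assumes direct write access; minikube does it remotely under sudo, and blank lines are dropped here for brevity):

	// hosts_sketch.go: idempotent host.minikube.internal entry.
	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.85.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}
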
	I1025 09:34:42.575986  200380 kubeadm.go:883] updating cluster {Name:embed-certs-173264 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-173264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:34:42.576099  200380 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:34:42.576152  200380 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:34:42.608587  200380 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:34:42.608619  200380 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:34:42.608674  200380 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:34:42.637913  200380 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:34:42.637990  200380 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:34:42.638012  200380 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 09:34:42.638166  200380 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-173264 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-173264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:34:42.638261  200380 ssh_runner.go:195] Run: crio config
	I1025 09:34:42.701396  200380 cni.go:84] Creating CNI manager for ""
	I1025 09:34:42.701430  200380 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:34:42.701460  200380 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:34:42.701508  200380 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-173264 NodeName:embed-certs-173264 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:34:42.701675  200380 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-173264"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
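
	kubeadm.yaml.new above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch that splits and inspects such a stream with gopkg.in/yaml.v3 (assumed dependency):

	// kubeadm_yaml_sketch.go: print each document's kind and apiVersion.
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}
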
	I1025 09:34:42.701772  200380 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:34:42.709564  200380 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:34:42.709635  200380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:34:42.716931  200380 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 09:34:42.729668  200380 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:34:42.742224  200380 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1025 09:34:42.755924  200380 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:34:42.759853  200380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:34:42.769678  200380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:34:42.882548  200380 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:34:42.898300  200380 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264 for IP: 192.168.85.2
	I1025 09:34:42.898321  200380 certs.go:195] generating shared ca certs ...
	I1025 09:34:42.898337  200380 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:34:42.898552  200380 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:34:42.898621  200380 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:34:42.898632  200380 certs.go:257] generating profile certs ...
	I1025 09:34:42.898737  200380 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/client.key
	I1025 09:34:42.898823  200380 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.key.cec4835f
	I1025 09:34:42.898894  200380 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/proxy-client.key
	I1025 09:34:42.899040  200380 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:34:42.899090  200380 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:34:42.899105  200380 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:34:42.899133  200380 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:34:42.899189  200380 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:34:42.899220  200380 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:34:42.899284  200380 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:34:42.899870  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:34:42.926753  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:34:42.947085  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:34:42.978833  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:34:43.000791  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 09:34:43.022233  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:34:43.046852  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:34:43.072016  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/embed-certs-173264/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:34:43.097218  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:34:43.125412  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:34:43.153127  200380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:34:43.172754  200380 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:34:43.189003  200380 ssh_runner.go:195] Run: openssl version
	I1025 09:34:43.196652  200380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:34:43.205745  200380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:34:43.209550  200380 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:34:43.209632  200380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:34:43.253804  200380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:34:43.261654  200380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:34:43.269677  200380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:34:43.273355  200380 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:34:43.273415  200380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:34:43.319092  200380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
	I1025 09:34:43.326755  200380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:34:43.335043  200380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:34:43.338953  200380 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:34:43.339017  200380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:34:43.380989  200380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
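
	The hash/symlink pairs above (minikubeCA.pem -> b5213941.0, 4110.pem -> 51391683.0, 41102.pem -> 3ec20f2e.0) follow OpenSSL's c_rehash convention: CA lookups scan the certs directory for <subject-hash>.0 names. A sketch reproducing one link by shelling out to openssl for the hash (needs root for /etc/ssl/certs):

	// cert_rehash_sketch.go: create an OpenSSL subject-hash symlink.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		// `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
		_ = os.Remove(link) // refresh if it already exists
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}
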
	I1025 09:34:43.389710  200380 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:34:43.394110  200380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:34:43.435752  200380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:34:43.477474  200380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:34:43.522977  200380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:34:43.575053  200380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:34:43.644584  200380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
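
	Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate expires within the next 86400 seconds (24h), which is what triggers regeneration. An in-process equivalent with crypto/x509:

	// checkend_sketch.go: fail if a cert expires within 24h.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h; would regenerate")
			os.Exit(1)
		}
		fmt.Println("certificate valid beyond 24h")
	}
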
	I1025 09:34:43.700882  200380 kubeadm.go:400] StartCluster: {Name:embed-certs-173264 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-173264 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:34:43.700992  200380 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:34:43.701060  200380 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:34:43.770432  200380 cri.go:89] found id: "d6ad6127ca83d1792eb0f03aca451cdd0a78c05c4baaecc9fd3ec902ddd40c88"
	I1025 09:34:43.770471  200380 cri.go:89] found id: "8f686a2912e6c0a8d6e4d5311cba470140c0c77f3d59d36367a840a7e2c18a5b"
	I1025 09:34:43.770477  200380 cri.go:89] found id: "0bd9ad4a667885f7374b118b9cdffe51d851fae7ec99302cc3b3126ee7b47b5a"
	I1025 09:34:43.770481  200380 cri.go:89] found id: ""
	I1025 09:34:43.770536  200380 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:34:43.787425  200380 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:34:43Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:34:43.787513  200380 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:34:43.806769  200380 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:34:43.806787  200380 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:34:43.806850  200380 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:34:43.826245  200380 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:34:43.826841  200380 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-173264" does not appear in /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:34:43.827122  200380 kubeconfig.go:62] /home/jenkins/minikube-integration/21796-2312/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-173264" cluster setting kubeconfig missing "embed-certs-173264" context setting]
	I1025 09:34:43.827673  200380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:34:43.829062  200380 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:34:43.844782  200380 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 09:34:43.844816  200380 kubeadm.go:601] duration metric: took 38.01646ms to restartPrimaryControlPlane
	I1025 09:34:43.844825  200380 kubeadm.go:402] duration metric: took 143.952796ms to StartCluster
	I1025 09:34:43.844844  200380 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:34:43.844908  200380 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:34:43.846240  200380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
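
	The kubeconfig repair above adds the missing cluster and context entries and rewrites the file under a lock. A sketch of the same repair with client-go's clientcmd (assumed dependency; a real entry also carries the cluster CA and client certificate data):

	// kubeconfig_repair_sketch.go: add cluster + context, write back.
	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := "/home/jenkins/minikube-integration/21796-2312/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			panic(err)
		}
		cfg.Clusters["embed-certs-173264"] = &api.Cluster{Server: "https://192.168.85.2:8443"}
		cfg.Contexts["embed-certs-173264"] = &api.Context{
			Cluster:  "embed-certs-173264",
			AuthInfo: "embed-certs-173264",
		}
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			panic(err)
		}
	}
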
	I1025 09:34:43.846462  200380 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:34:43.846762  200380 config.go:182] Loaded profile config "embed-certs-173264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:34:43.846819  200380 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:34:43.846889  200380 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-173264"
	I1025 09:34:43.846902  200380 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-173264"
	W1025 09:34:43.846908  200380 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:34:43.846938  200380 host.go:66] Checking if "embed-certs-173264" exists ...
	I1025 09:34:43.846967  200380 addons.go:69] Setting dashboard=true in profile "embed-certs-173264"
	I1025 09:34:43.847021  200380 addons.go:238] Setting addon dashboard=true in "embed-certs-173264"
	W1025 09:34:43.847042  200380 addons.go:247] addon dashboard should already be in state true
	I1025 09:34:43.847099  200380 host.go:66] Checking if "embed-certs-173264" exists ...
	I1025 09:34:43.847398  200380 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:34:43.847704  200380 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:34:43.850085  200380 addons.go:69] Setting default-storageclass=true in profile "embed-certs-173264"
	I1025 09:34:43.850264  200380 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-173264"
	I1025 09:34:43.851334  200380 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:34:43.858382  200380 out.go:179] * Verifying Kubernetes components...
	I1025 09:34:43.862226  200380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:34:43.905425  200380 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 09:34:43.908335  200380 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:34:43.908442  200380 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:34:43.911179  200380 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:34:43.911200  200380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:34:43.911266  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:43.911403  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:34:43.911413  200380 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:34:43.911446  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:43.949312  200380 addons.go:238] Setting addon default-storageclass=true in "embed-certs-173264"
	W1025 09:34:43.949338  200380 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:34:43.949361  200380 host.go:66] Checking if "embed-certs-173264" exists ...
	I1025 09:34:43.949817  200380 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:34:43.972151  200380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:34:43.990353  200380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:34:43.998358  200380 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:34:43.998380  200380 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:34:43.998443  200380 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:34:44.027918  200380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:34:44.207116  200380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:34:44.241207  200380 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:34:44.296028  200380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:34:44.311112  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:34:44.311143  200380 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:34:44.343681  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:34:44.343713  200380 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:34:44.445575  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:34:44.445599  200380 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:34:44.513451  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:34:44.513475  200380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:34:44.529607  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:34:44.529631  200380 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:34:44.544886  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:34:44.544908  200380 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:34:44.559901  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:34:44.559924  200380 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:34:44.578857  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:34:44.578881  200380 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:34:44.598828  200380 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:34:44.598851  200380 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:34:44.618997  200380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1025 09:34:43.380391  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	W1025 09:34:45.381520  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	I1025 09:34:50.236048  200380 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.994763458s)
	I1025 09:34:50.236095  200380 node_ready.go:35] waiting up to 6m0s for node "embed-certs-173264" to be "Ready" ...
	I1025 09:34:50.236392  200380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.940336745s)
	I1025 09:34:50.236651  200380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.617608499s)
	I1025 09:34:50.236937  200380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.029739765s)
	I1025 09:34:50.239779  200380 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-173264 addons enable metrics-server
	
	I1025 09:34:50.259812  200380 node_ready.go:49] node "embed-certs-173264" is "Ready"
	I1025 09:34:50.259911  200380 node_ready.go:38] duration metric: took 23.783192ms for node "embed-certs-173264" to be "Ready" ...
	I1025 09:34:50.259939  200380 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:34:50.260025  200380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:34:50.275415  200380 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1025 09:34:50.278375  200380 addons.go:514] duration metric: took 6.431549385s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 09:34:50.283052  200380 api_server.go:72] duration metric: took 6.436556552s to wait for apiserver process to appear ...
	I1025 09:34:50.283128  200380 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:34:50.283172  200380 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:34:50.292448  200380 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 09:34:50.293751  200380 api_server.go:141] control plane version: v1.34.1
	I1025 09:34:50.293786  200380 api_server.go:131] duration metric: took 10.629546ms to wait for apiserver health ...
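
	The healthz wait above polls https://192.168.85.2:8443/healthz until it answers 200 "ok". A standalone sketch of that loop (TLS verification is skipped here for brevity; minikube itself trusts the cluster CA instead):

	// healthz_sketch.go: wait for the apiserver healthz endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 30; i++ {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body) // prints "ok"
					return
				}
			}
			time.Sleep(time.Second)
		}
		panic("apiserver never became healthy")
	}
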
	I1025 09:34:50.293797  200380 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:34:50.297262  200380 system_pods.go:59] 8 kube-system pods found
	I1025 09:34:50.297306  200380 system_pods.go:61] "coredns-66bc5c9577-vgz5x" [0f0e1eb2-95c0-4e48-9237-fa235bd6c06d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:34:50.297316  200380 system_pods.go:61] "etcd-embed-certs-173264" [614385a1-5378-4984-b048-8d85c96938f2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:34:50.297322  200380 system_pods.go:61] "kindnet-862lz" [108ce2e5-6770-4794-a1de-503d2a6ea2a9] Running
	I1025 09:34:50.297330  200380 system_pods.go:61] "kube-apiserver-embed-certs-173264" [ff2383a6-cf45-400b-a449-82c480d2345e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:34:50.297350  200380 system_pods.go:61] "kube-controller-manager-embed-certs-173264" [53385786-a0a7-40cc-9e25-ba2224c653bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:34:50.297358  200380 system_pods.go:61] "kube-proxy-gwv98" [173eff2d-86b5-4951-9928-37409b52fbab] Running
	I1025 09:34:50.297367  200380 system_pods.go:61] "kube-scheduler-embed-certs-173264" [cbf80061-881a-4991-b7d4-f04920872558] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:34:50.297375  200380 system_pods.go:61] "storage-provisioner" [21656d87-d41f-4d4c-87aa-5cbf74c12af2] Running
	I1025 09:34:50.297381  200380 system_pods.go:74] duration metric: took 3.578473ms to wait for pod list to return data ...
	I1025 09:34:50.297389  200380 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:34:50.300105  200380 default_sa.go:45] found service account: "default"
	I1025 09:34:50.300132  200380 default_sa.go:55] duration metric: took 2.72974ms for default service account to be created ...
	I1025 09:34:50.300142  200380 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:34:50.303744  200380 system_pods.go:86] 8 kube-system pods found
	I1025 09:34:50.303780  200380 system_pods.go:89] "coredns-66bc5c9577-vgz5x" [0f0e1eb2-95c0-4e48-9237-fa235bd6c06d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:34:50.303790  200380 system_pods.go:89] "etcd-embed-certs-173264" [614385a1-5378-4984-b048-8d85c96938f2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:34:50.303796  200380 system_pods.go:89] "kindnet-862lz" [108ce2e5-6770-4794-a1de-503d2a6ea2a9] Running
	I1025 09:34:50.303803  200380 system_pods.go:89] "kube-apiserver-embed-certs-173264" [ff2383a6-cf45-400b-a449-82c480d2345e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:34:50.303810  200380 system_pods.go:89] "kube-controller-manager-embed-certs-173264" [53385786-a0a7-40cc-9e25-ba2224c653bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:34:50.303821  200380 system_pods.go:89] "kube-proxy-gwv98" [173eff2d-86b5-4951-9928-37409b52fbab] Running
	I1025 09:34:50.303828  200380 system_pods.go:89] "kube-scheduler-embed-certs-173264" [cbf80061-881a-4991-b7d4-f04920872558] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:34:50.303835  200380 system_pods.go:89] "storage-provisioner" [21656d87-d41f-4d4c-87aa-5cbf74c12af2] Running
	I1025 09:34:50.303842  200380 system_pods.go:126] duration metric: took 3.694519ms to wait for k8s-apps to be running ...
	I1025 09:34:50.303854  200380 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:34:50.303913  200380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:34:50.323203  200380 system_svc.go:56] duration metric: took 19.322059ms WaitForService to wait for kubelet
	I1025 09:34:50.323280  200380 kubeadm.go:586] duration metric: took 6.476787133s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:34:50.323315  200380 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:34:50.327067  200380 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:34:50.327147  200380 node_conditions.go:123] node cpu capacity is 2
	I1025 09:34:50.327173  200380 node_conditions.go:105] duration metric: took 3.836888ms to run NodePressure ...
	I1025 09:34:50.327201  200380 start.go:241] waiting for startup goroutines ...
	I1025 09:34:50.327233  200380 start.go:246] waiting for cluster config update ...
	I1025 09:34:50.327263  200380 start.go:255] writing updated cluster config ...
	I1025 09:34:50.327607  200380 ssh_runner.go:195] Run: rm -f paused
	I1025 09:34:50.331557  200380 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:34:50.336690  200380 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vgz5x" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:34:47.879354  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	W1025 09:34:50.378748  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	W1025 09:34:52.379416  197465 pod_ready.go:104] pod "coredns-66bc5c9577-b266v" is not "Ready", error: <nil>
	I1025 09:34:53.884708  197465 pod_ready.go:94] pod "coredns-66bc5c9577-b266v" is "Ready"
	I1025 09:34:53.884733  197465 pod_ready.go:86] duration metric: took 31.511502149s for pod "coredns-66bc5c9577-b266v" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:53.888695  197465 pod_ready.go:83] waiting for pod "etcd-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:53.896110  197465 pod_ready.go:94] pod "etcd-no-preload-179869" is "Ready"
	I1025 09:34:53.896133  197465 pod_ready.go:86] duration metric: took 7.415311ms for pod "etcd-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:53.905179  197465 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:53.912774  197465 pod_ready.go:94] pod "kube-apiserver-no-preload-179869" is "Ready"
	I1025 09:34:53.912841  197465 pod_ready.go:86] duration metric: took 7.637435ms for pod "kube-apiserver-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:53.917034  197465 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:54.078579  197465 pod_ready.go:94] pod "kube-controller-manager-no-preload-179869" is "Ready"
	I1025 09:34:54.078611  197465 pod_ready.go:86] duration metric: took 161.455743ms for pod "kube-controller-manager-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:54.277053  197465 pod_ready.go:83] waiting for pod "kube-proxy-7xf9w" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:54.677754  197465 pod_ready.go:94] pod "kube-proxy-7xf9w" is "Ready"
	I1025 09:34:54.677785  197465 pod_ready.go:86] duration metric: took 400.703526ms for pod "kube-proxy-7xf9w" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:54.877322  197465 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:55.276915  197465 pod_ready.go:94] pod "kube-scheduler-no-preload-179869" is "Ready"
	I1025 09:34:55.276952  197465 pod_ready.go:86] duration metric: took 399.602972ms for pod "kube-scheduler-no-preload-179869" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:34:55.276965  197465 pod_ready.go:40] duration metric: took 32.908111194s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:34:55.361914  197465 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:34:55.369382  197465 out.go:179] * Done! kubectl is now configured to use "no-preload-179869" cluster and "default" namespace by default
	W1025 09:34:52.344696  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	W1025 09:34:54.843443  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	W1025 09:34:57.342552  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	W1025 09:34:59.342806  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	W1025 09:35:01.343280  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	W1025 09:35:03.349628  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	W1025 09:35:05.846315  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	W1025 09:35:08.346339  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	W1025 09:35:10.843362  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
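
	The pod_ready warnings above come from repeatedly checking whether each pod's PodReady condition is True. A sketch of such a wait with client-go and apimachinery (assumed dependencies), using the coredns pod from this run:

	// pod_ready_sketch.go: poll a pod until its PodReady condition is True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21796-2312/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-vgz5x", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient errors as "not ready yet"
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println(`pod "coredns-66bc5c9577-vgz5x" is "Ready"`)
	}
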
	
	
	==> CRI-O <==
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.03547616Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e4a28540-c825-44f0-92ba-1619d4934980 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.03650704Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b7fb1484-b4fe-45a6-9272-a24f66ac77fb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.036618131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.047304941Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.04749202Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0533d99364909fb751cc483df338600ec8fd4e4b1c59841babd91e4a1faf2716/merged/etc/passwd: no such file or directory"
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.047523544Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0533d99364909fb751cc483df338600ec8fd4e4b1c59841babd91e4a1faf2716/merged/etc/group: no such file or directory"
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.047802186Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.074366433Z" level=info msg="Created container 02492f2e14bb4c99de4a34ac3755e5a50a8595bf086464df913659f910bc608c: kube-system/storage-provisioner/storage-provisioner" id=b7fb1484-b4fe-45a6-9272-a24f66ac77fb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.082471746Z" level=info msg="Starting container: 02492f2e14bb4c99de4a34ac3755e5a50a8595bf086464df913659f910bc608c" id=b196c27f-e89f-4fb6-82ee-4f8ff4dcebe0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:34:53 no-preload-179869 crio[652]: time="2025-10-25T09:34:53.085741866Z" level=info msg="Started container" PID=1628 containerID=02492f2e14bb4c99de4a34ac3755e5a50a8595bf086464df913659f910bc608c description=kube-system/storage-provisioner/storage-provisioner id=b196c27f-e89f-4fb6-82ee-4f8ff4dcebe0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc4d4670b70d2be7128433b293eb9faef9714c87e169e92e3e7e32b50e378974
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.127107074Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.132274505Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.132493363Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.132574439Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.137226647Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.13773853Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.138734875Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.146615069Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.147383587Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.147486563Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.152750889Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.153103238Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.15335472Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.158355216Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:02 no-preload-179869 crio[652]: time="2025-10-25T09:35:02.158512207Z" level=info msg="Updated default CNI network name to kindnet"
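
	CRI-O's CNI monitor above reloads /etc/cni/net.d/10-kindnet.conflist on every create/write/rename and re-derives the default network from it. A sketch that parses the same conflist shape (field names per the CNI conflist spec) and reports what CRI-O logs:

	// conflist_sketch.go: inspect a CNI conflist like CRI-O's monitor does.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type conflist struct {
		CNIVersion string `json:"cniVersion"`
		Name       string `json:"name"`
		Plugins    []struct {
			Type string `json:"type"`
		} `json:"plugins"`
	}

	func main() {
		data, err := os.ReadFile("/etc/cni/net.d/10-kindnet.conflist")
		if err != nil {
			panic(err)
		}
		var cl conflist
		if err := json.Unmarshal(data, &cl); err != nil {
			panic(err)
		}
		for _, p := range cl.Plugins {
			// Mirrors: Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist
			fmt.Printf("Found CNI network %s (type=%s)\n", cl.Name, p.Type)
		}
	}
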
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	02492f2e14bb4       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           19 seconds ago      Running             storage-provisioner         2                   dc4d4670b70d2       storage-provisioner                          kube-system
	0f5c0e42589c0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   07a179dc2dc80       dashboard-metrics-scraper-6ffb444bf9-xgdkt   kubernetes-dashboard
	1fbe8ea6e84e4       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago      Running             kubernetes-dashboard        0                   cc6658b453b18       kubernetes-dashboard-855c9754f9-mfm5d        kubernetes-dashboard
	df3feec3e7122       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago      Running             coredns                     1                   30d5c98639ae4       coredns-66bc5c9577-b266v                     kube-system
	9c04ac65697af       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   4e8d390cdbce9       kindnet-qjcqv                                kube-system
	6510a34836626       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           50 seconds ago      Exited              storage-provisioner         1                   dc4d4670b70d2       storage-provisioner                          kube-system
	5bbcaa7d690b6       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   ff10f920138e9       busybox                                      default
	184a1a0c95fe5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago      Running             kube-proxy                  1                   09d2a9d4f4c99       kube-proxy-7xf9w                             kube-system
	f5bada5085d9f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           56 seconds ago      Running             etcd                        1                   f80c76cb3e306       etcd-no-preload-179869                       kube-system
	322d6c0123c7e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           56 seconds ago      Running             kube-apiserver              1                   c43ff245bbc4d       kube-apiserver-no-preload-179869             kube-system
	e7e2e32307ca9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           56 seconds ago      Running             kube-controller-manager     1                   e0ea882d7de77       kube-controller-manager-no-preload-179869    kube-system
	a6cb8feabf010       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           56 seconds ago      Running             kube-scheduler              1                   b602f5ff39883       kube-scheduler-no-preload-179869             kube-system
	
	
	==> coredns [df3feec3e7122e0051a6b34445e67aedc3b8e22118eabef976d15e9b8e99540c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49163 - 50799 "HINFO IN 3930737424186449920.7220986438533358864. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035542301s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-179869
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-179869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=no-preload-179869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_33_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:33:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-179869
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:35:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:34:50 +0000   Sat, 25 Oct 2025 09:33:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:34:50 +0000   Sat, 25 Oct 2025 09:33:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:34:50 +0000   Sat, 25 Oct 2025 09:33:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:34:50 +0000   Sat, 25 Oct 2025 09:33:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-179869
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                ea4ee067-c337-4055-9d54-e11f82ef0c5b
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-b266v                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-no-preload-179869                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-qjcqv                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-179869              250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-179869     200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-7xf9w                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-179869              100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-xgdkt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mfm5d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 107s                 kube-proxy       
	  Normal   Starting                 50s                  kube-proxy       
	  Normal   Starting                 2m6s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m6s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node no-preload-179869 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node no-preload-179869 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node no-preload-179869 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    115s                 kubelet          Node no-preload-179869 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 115s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  115s                 kubelet          Node no-preload-179869 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     115s                 kubelet          Node no-preload-179869 status is now: NodeHasSufficientPID
	  Normal   Starting                 115s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           111s                 node-controller  Node no-preload-179869 event: Registered Node no-preload-179869 in Controller
	  Normal   NodeReady                93s                  kubelet          Node no-preload-179869 status is now: NodeReady
	  Normal   Starting                 58s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 58s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  58s (x8 over 58s)    kubelet          Node no-preload-179869 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node no-preload-179869 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x8 over 58s)    kubelet          Node no-preload-179869 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                  node-controller  Node no-preload-179869 event: Registered Node no-preload-179869 in Controller
	
	
	==> dmesg <==
	[Oct25 09:11] overlayfs: idmapped layers are currently not supported
	[Oct25 09:13] overlayfs: idmapped layers are currently not supported
	[ +18.632418] overlayfs: idmapped layers are currently not supported
	[ +13.271347] overlayfs: idmapped layers are currently not supported
	[Oct25 09:14] overlayfs: idmapped layers are currently not supported
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	[Oct25 09:32] overlayfs: idmapped layers are currently not supported
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	[ +28.289706] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f5bada5085d9fc38a05b5c28df63b45e3d7d2804b79eb6e9472ffcfe51192fcf] <==
	{"level":"warn","ts":"2025-10-25T09:34:18.243619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.267882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.293613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.311749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.326743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.347606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.362418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.378503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.393536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.414736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.434472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.453519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.471038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.498052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.516095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.533551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.544346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.561943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.601883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.629918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.650762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.674488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.697539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.715830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:18.782274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56160","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:35:12 up  1:17,  0 user,  load average: 4.03, 3.66, 2.92
	Linux no-preload-179869 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9c04ac65697af3509d6dea534349d6c8fb0a3f0c0d513b13f56f5066fb198d68] <==
	I1025 09:34:21.939388       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:34:21.939696       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 09:34:21.939827       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:34:21.939840       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:34:21.939854       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:34:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:34:22.121459       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:34:22.121545       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:34:22.121581       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:34:22.121773       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 09:34:52.126381       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:34:52.126384       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:34:52.126503       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:34:52.126517       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1025 09:34:53.522538       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:34:53.522574       1 metrics.go:72] Registering metrics
	I1025 09:34:53.522695       1 controller.go:711] "Syncing nftables rules"
	I1025 09:35:02.126256       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:35:02.126609       1 main.go:301] handling current node
	I1025 09:35:12.130339       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:35:12.130420       1 main.go:301] handling current node
	
	
	==> kube-apiserver [322d6c0123c7e7fdc2849fe7f0af01136e262450452ab009ce0b5204f1aa3c61] <==
	I1025 09:34:19.887472       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:34:19.910082       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 09:34:19.920434       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:34:19.920591       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:34:19.926140       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:34:19.947629       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1025 09:34:19.951919       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:34:19.970310       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:34:19.970330       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:34:19.976895       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:34:19.977132       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:34:19.977182       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:34:19.981128       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:34:19.997411       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:34:20.477244       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:34:20.726406       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:34:20.863691       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:34:21.221250       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:34:21.351576       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:34:21.416663       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:34:21.746456       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.153.58"}
	I1025 09:34:21.791672       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.195.199"}
	I1025 09:34:24.229687       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:34:24.532410       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:34:24.640833       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e7e2e32307ca96424361bfd29933a486f201599a8ece9c4103b9b800c7dc2e1e] <==
	I1025 09:34:24.101520       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 09:34:24.101535       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:34:24.102660       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:34:24.105565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:34:24.121114       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:34:24.121274       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:34:24.121596       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:34:24.122067       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:34:24.122189       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:34:24.122239       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:34:24.122726       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:34:24.123308       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:34:24.124492       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:34:24.125409       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:34:24.143392       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:34:24.146685       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:34:24.153316       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:34:24.156460       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:34:24.156588       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:34:24.156669       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-179869"
	I1025 09:34:24.156721       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:34:24.159403       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:34:24.162966       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:34:24.165671       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:34:24.172458       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	
	
	==> kube-proxy [184a1a0c95fe5d8445d3fb8d8272b28b468161702f9ecd979fa5d4af4a93e122] <==
	I1025 09:34:21.580959       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:34:22.064717       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:34:22.165133       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:34:22.165177       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 09:34:22.165275       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:34:22.198249       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:34:22.198304       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:34:22.202681       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:34:22.203112       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:34:22.203357       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:34:22.204697       1 config.go:200] "Starting service config controller"
	I1025 09:34:22.204749       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:34:22.204770       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:34:22.204775       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:34:22.204786       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:34:22.204790       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:34:22.205623       1 config.go:309] "Starting node config controller"
	I1025 09:34:22.205672       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:34:22.205703       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:34:22.305103       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:34:22.305140       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:34:22.305210       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a6cb8feabf010cd74e1dfdfebd8f0990900f05746d51f737f324f6b0f0b15aee] <==
	I1025 09:34:18.360662       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:34:19.838364       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:34:19.838397       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:34:19.838415       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:34:19.838423       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:34:19.952887       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:34:19.952916       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:34:19.956867       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:34:19.956915       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:34:19.958607       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:34:19.958869       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:34:20.058263       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:34:24 no-preload-179869 kubelet[771]: I1025 09:34:24.733148     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhw4n\" (UniqueName: \"kubernetes.io/projected/2a0c611c-213f-4b5c-a9fc-5df39eb29500-kube-api-access-lhw4n\") pod \"dashboard-metrics-scraper-6ffb444bf9-xgdkt\" (UID: \"2a0c611c-213f-4b5c-a9fc-5df39eb29500\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xgdkt"
	Oct 25 09:34:24 no-preload-179869 kubelet[771]: I1025 09:34:24.733171     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jshj6\" (UniqueName: \"kubernetes.io/projected/a50d6b41-01f1-46ca-bcfd-0d1aefe83b4a-kube-api-access-jshj6\") pod \"kubernetes-dashboard-855c9754f9-mfm5d\" (UID: \"a50d6b41-01f1-46ca-bcfd-0d1aefe83b4a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mfm5d"
	Oct 25 09:34:24 no-preload-179869 kubelet[771]: I1025 09:34:24.733251     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a50d6b41-01f1-46ca-bcfd-0d1aefe83b4a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-mfm5d\" (UID: \"a50d6b41-01f1-46ca-bcfd-0d1aefe83b4a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mfm5d"
	Oct 25 09:34:24 no-preload-179869 kubelet[771]: W1025 09:34:24.994651     771 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/021c28390d46e140cb149a266cb60178228c9cf40f85185de0d05f970fd0ddea/crio-cc6658b453b1849e1ddb529b759cf9c01ce3a6a36515b0448171d6d255b3fcc8 WatchSource:0}: Error finding container cc6658b453b1849e1ddb529b759cf9c01ce3a6a36515b0448171d6d255b3fcc8: Status 404 returned error can't find the container with id cc6658b453b1849e1ddb529b759cf9c01ce3a6a36515b0448171d6d255b3fcc8
	Oct 25 09:34:28 no-preload-179869 kubelet[771]: I1025 09:34:28.929926     771 scope.go:117] "RemoveContainer" containerID="f6d0053f55f87e3e8e6778a9c496fe645d6a6f7fb0b158689b75bd4a3f3383eb"
	Oct 25 09:34:29 no-preload-179869 kubelet[771]: I1025 09:34:29.934479     771 scope.go:117] "RemoveContainer" containerID="f6d0053f55f87e3e8e6778a9c496fe645d6a6f7fb0b158689b75bd4a3f3383eb"
	Oct 25 09:34:29 no-preload-179869 kubelet[771]: I1025 09:34:29.934745     771 scope.go:117] "RemoveContainer" containerID="9d87a0d35fd4c9ce5cdf73c1107b1d02e08a1e5de0efa8d33b9ae497e90b594f"
	Oct 25 09:34:29 no-preload-179869 kubelet[771]: E1025 09:34:29.934885     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xgdkt_kubernetes-dashboard(2a0c611c-213f-4b5c-a9fc-5df39eb29500)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xgdkt" podUID="2a0c611c-213f-4b5c-a9fc-5df39eb29500"
	Oct 25 09:34:30 no-preload-179869 kubelet[771]: I1025 09:34:30.938978     771 scope.go:117] "RemoveContainer" containerID="9d87a0d35fd4c9ce5cdf73c1107b1d02e08a1e5de0efa8d33b9ae497e90b594f"
	Oct 25 09:34:30 no-preload-179869 kubelet[771]: E1025 09:34:30.939152     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xgdkt_kubernetes-dashboard(2a0c611c-213f-4b5c-a9fc-5df39eb29500)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xgdkt" podUID="2a0c611c-213f-4b5c-a9fc-5df39eb29500"
	Oct 25 09:34:34 no-preload-179869 kubelet[771]: I1025 09:34:34.916692     771 scope.go:117] "RemoveContainer" containerID="9d87a0d35fd4c9ce5cdf73c1107b1d02e08a1e5de0efa8d33b9ae497e90b594f"
	Oct 25 09:34:34 no-preload-179869 kubelet[771]: E1025 09:34:34.916933     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xgdkt_kubernetes-dashboard(2a0c611c-213f-4b5c-a9fc-5df39eb29500)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xgdkt" podUID="2a0c611c-213f-4b5c-a9fc-5df39eb29500"
	Oct 25 09:34:48 no-preload-179869 kubelet[771]: I1025 09:34:48.791884     771 scope.go:117] "RemoveContainer" containerID="9d87a0d35fd4c9ce5cdf73c1107b1d02e08a1e5de0efa8d33b9ae497e90b594f"
	Oct 25 09:34:49 no-preload-179869 kubelet[771]: I1025 09:34:49.001892     771 scope.go:117] "RemoveContainer" containerID="9d87a0d35fd4c9ce5cdf73c1107b1d02e08a1e5de0efa8d33b9ae497e90b594f"
	Oct 25 09:34:49 no-preload-179869 kubelet[771]: I1025 09:34:49.002279     771 scope.go:117] "RemoveContainer" containerID="0f5c0e42589c00e20e749547249c04c97cb314fd245734ef647815f3c008a16d"
	Oct 25 09:34:49 no-preload-179869 kubelet[771]: E1025 09:34:49.002460     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xgdkt_kubernetes-dashboard(2a0c611c-213f-4b5c-a9fc-5df39eb29500)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xgdkt" podUID="2a0c611c-213f-4b5c-a9fc-5df39eb29500"
	Oct 25 09:34:49 no-preload-179869 kubelet[771]: I1025 09:34:49.037512     771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mfm5d" podStartSLOduration=16.810966589 podStartE2EDuration="25.037495016s" podCreationTimestamp="2025-10-25 09:34:24 +0000 UTC" firstStartedPulling="2025-10-25 09:34:25.00239123 +0000 UTC m=+10.500899604" lastFinishedPulling="2025-10-25 09:34:33.228919665 +0000 UTC m=+18.727428031" observedRunningTime="2025-10-25 09:34:33.965234896 +0000 UTC m=+19.463743270" watchObservedRunningTime="2025-10-25 09:34:49.037495016 +0000 UTC m=+34.536003382"
	Oct 25 09:34:53 no-preload-179869 kubelet[771]: I1025 09:34:53.033265     771 scope.go:117] "RemoveContainer" containerID="6510a348366267527a1c6d9da08ba9302f7233cf6baf4feee121e3108a4111f9"
	Oct 25 09:34:54 no-preload-179869 kubelet[771]: I1025 09:34:54.917036     771 scope.go:117] "RemoveContainer" containerID="0f5c0e42589c00e20e749547249c04c97cb314fd245734ef647815f3c008a16d"
	Oct 25 09:34:54 no-preload-179869 kubelet[771]: E1025 09:34:54.917701     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xgdkt_kubernetes-dashboard(2a0c611c-213f-4b5c-a9fc-5df39eb29500)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xgdkt" podUID="2a0c611c-213f-4b5c-a9fc-5df39eb29500"
	Oct 25 09:35:06 no-preload-179869 kubelet[771]: I1025 09:35:06.789906     771 scope.go:117] "RemoveContainer" containerID="0f5c0e42589c00e20e749547249c04c97cb314fd245734ef647815f3c008a16d"
	Oct 25 09:35:06 no-preload-179869 kubelet[771]: E1025 09:35:06.790558     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xgdkt_kubernetes-dashboard(2a0c611c-213f-4b5c-a9fc-5df39eb29500)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xgdkt" podUID="2a0c611c-213f-4b5c-a9fc-5df39eb29500"
	Oct 25 09:35:07 no-preload-179869 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:35:07 no-preload-179869 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:35:07 no-preload-179869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [1fbe8ea6e84e49a7a97e66e41ae18a032f04d7efd3715340156d52f83e56d5f9] <==
	2025/10/25 09:34:33 Starting overwatch
	2025/10/25 09:34:33 Using namespace: kubernetes-dashboard
	2025/10/25 09:34:33 Using in-cluster config to connect to apiserver
	2025/10/25 09:34:33 Using secret token for csrf signing
	2025/10/25 09:34:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:34:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:34:33 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:34:33 Generating JWE encryption key
	2025/10/25 09:34:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:34:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:34:33 Initializing JWE encryption key from synchronized object
	2025/10/25 09:34:33 Creating in-cluster Sidecar client
	2025/10/25 09:34:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:34:33 Serving insecurely on HTTP port: 9090
	2025/10/25 09:35:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [02492f2e14bb4c99de4a34ac3755e5a50a8595bf086464df913659f910bc608c] <==
	I1025 09:34:53.098738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:34:53.114871       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:34:53.115664       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:34:53.127536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:34:56.583040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:00.843756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:04.443154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:07.496219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:10.518542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:10.529743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:35:10.530528       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6556c1b4-f5ed-4c56-8bcf-e108cc1d8bad", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-179869_f5bf9a2f-96af-4a86-97ee-47a4cf570d3c became leader
	I1025 09:35:10.530575       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:35:10.530664       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-179869_f5bf9a2f-96af-4a86-97ee-47a4cf570d3c!
	W1025 09:35:10.540133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:10.546304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:35:10.631024       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-179869_f5bf9a2f-96af-4a86-97ee-47a4cf570d3c!
	W1025 09:35:12.549947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:12.555516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6510a348366267527a1c6d9da08ba9302f7233cf6baf4feee121e3108a4111f9] <==
	I1025 09:34:22.026082       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:34:52.028250       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
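The recurring signal in the captured logs above is "dial tcp 10.96.0.1:443: i/o timeout": coredns, kindnet, and the first storage-provisioner attempt all failed to reach the kubernetes Service VIP for a window after the restart. A minimal Go probe along these lines (a sketch, not part of the test suite; it is only meaningful when run inside the cluster's pod network) reproduces the exact check those components were failing:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10.96.0.1:443 is the default kubernetes Service VIP named in the logs above.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("apiserver VIP unreachable:", err) // the coredns/kindnet logs report this as "i/o timeout"
			return
		}
		conn.Close()
		fmt.Println("apiserver VIP reachable")
	}

The timeouts stop once kube-proxy and kindnet report "Caches are synced", so this reads as startup ordering rather than a persistent network fault.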
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-179869 -n no-preload-179869
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-179869 -n no-preload-179869: exit status 2 (430.222243ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-179869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.13s)
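The post-mortem wrapper above follows a fixed triage sequence: re-run minikube status with a Go template to extract a single component field, treat exit status 2 as possibly benign when stdout still reports Running, then ask kubectl for any pods whose phase is not Running. A rough equivalent of the status step, sketched in Go (the binary path and profile name are taken from this run; nothing here is minikube's own code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "no-preload-179869"
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", profile, "-n", profile)
		out, err := cmd.Output() // stdout printed "Running" in this run
		fmt.Printf("apiserver: %s\n", out)
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A non-zero status code flags component state; the harness logs
			// exit status 2 as "may be ok" instead of failing the post-mortem.
			fmt.Println("status exit code:", exitErr.ExitCode())
		}
	}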

TestStartStop/group/embed-certs/serial/Pause (7.93s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-173264 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-173264 --alsologtostderr -v=1: exit status 80 (2.171907897s)

-- stdout --
	* Pausing node embed-certs-173264 ... 
	
	

-- /stdout --
** stderr ** 
	I1025 09:35:36.761515  206009 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:36.761811  206009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:36.761825  206009 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:36.761833  206009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:36.762121  206009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:35:36.762393  206009 out.go:368] Setting JSON to false
	I1025 09:35:36.762419  206009 mustload.go:65] Loading cluster: embed-certs-173264
	I1025 09:35:36.762810  206009 config.go:182] Loaded profile config "embed-certs-173264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:36.763284  206009 cli_runner.go:164] Run: docker container inspect embed-certs-173264 --format={{.State.Status}}
	I1025 09:35:36.795660  206009 host.go:66] Checking if "embed-certs-173264" exists ...
	I1025 09:35:36.795983  206009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:35:36.903322  206009 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-25 09:35:36.88905179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:35:36.904202  206009 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-173264 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:35:36.907720  206009 out.go:179] * Pausing node embed-certs-173264 ... 
	I1025 09:35:36.911559  206009 host.go:66] Checking if "embed-certs-173264" exists ...
	I1025 09:35:36.911912  206009 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:36.911967  206009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-173264
	I1025 09:35:36.935473  206009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/embed-certs-173264/id_rsa Username:docker}
	I1025 09:35:37.042484  206009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:35:37.063834  206009 pause.go:52] kubelet running: true
	I1025 09:35:37.063905  206009 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:35:37.362351  206009 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:35:37.362424  206009 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:35:37.458860  206009 cri.go:89] found id: "554118d5ba9b888eff74911aeb9bc49200cf2e408aa2932423cd99c6fddc0070"
	I1025 09:35:37.458933  206009 cri.go:89] found id: "b3142984c1fee9ffc110ab096d6a0855ca60948dfef193f2d783f2d20bd9886e"
	I1025 09:35:37.458963  206009 cri.go:89] found id: "1cfe71fd1fe7a1b6bea21b990d2a3dbcc5dd1b17f294993c302c09956f95be67"
	I1025 09:35:37.458982  206009 cri.go:89] found id: "39a267437a269c4557082094c09ced55f8c3e472342c82d7fd03ae3a25b0f17e"
	I1025 09:35:37.459014  206009 cri.go:89] found id: "0b944f32fb8b576132d82481361923d19fdefbaee98287df71455fe148002ac8"
	I1025 09:35:37.459036  206009 cri.go:89] found id: "ddedd8b79fda0b2dea509f8022b5abaeb8024fdc8737fab8024dee99c98d3b19"
	I1025 09:35:37.459059  206009 cri.go:89] found id: "d6ad6127ca83d1792eb0f03aca451cdd0a78c05c4baaecc9fd3ec902ddd40c88"
	I1025 09:35:37.459094  206009 cri.go:89] found id: "8f686a2912e6c0a8d6e4d5311cba470140c0c77f3d59d36367a840a7e2c18a5b"
	I1025 09:35:37.459115  206009 cri.go:89] found id: "0bd9ad4a667885f7374b118b9cdffe51d851fae7ec99302cc3b3126ee7b47b5a"
	I1025 09:35:37.459137  206009 cri.go:89] found id: "d635ff4cda8268a9abc07881a8b41b2e3801e381e546026d0ae96f16d82bfcd1"
	I1025 09:35:37.459172  206009 cri.go:89] found id: "bb24bf5fb5b2b235640af84b0d07ce607dc3657b1f1f6569aef374242880b7fb"
	I1025 09:35:37.459197  206009 cri.go:89] found id: ""
	I1025 09:35:37.459278  206009 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:37.470965  206009 retry.go:31] will retry after 276.950493ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:37Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:35:37.748498  206009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:35:37.762923  206009 pause.go:52] kubelet running: false
	I1025 09:35:37.763050  206009 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:35:38.056686  206009 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:35:38.056827  206009 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:35:38.126691  206009 cri.go:89] found id: "554118d5ba9b888eff74911aeb9bc49200cf2e408aa2932423cd99c6fddc0070"
	I1025 09:35:38.126723  206009 cri.go:89] found id: "b3142984c1fee9ffc110ab096d6a0855ca60948dfef193f2d783f2d20bd9886e"
	I1025 09:35:38.126729  206009 cri.go:89] found id: "1cfe71fd1fe7a1b6bea21b990d2a3dbcc5dd1b17f294993c302c09956f95be67"
	I1025 09:35:38.126733  206009 cri.go:89] found id: "39a267437a269c4557082094c09ced55f8c3e472342c82d7fd03ae3a25b0f17e"
	I1025 09:35:38.126736  206009 cri.go:89] found id: "0b944f32fb8b576132d82481361923d19fdefbaee98287df71455fe148002ac8"
	I1025 09:35:38.126740  206009 cri.go:89] found id: "ddedd8b79fda0b2dea509f8022b5abaeb8024fdc8737fab8024dee99c98d3b19"
	I1025 09:35:38.126743  206009 cri.go:89] found id: "d6ad6127ca83d1792eb0f03aca451cdd0a78c05c4baaecc9fd3ec902ddd40c88"
	I1025 09:35:38.126746  206009 cri.go:89] found id: "8f686a2912e6c0a8d6e4d5311cba470140c0c77f3d59d36367a840a7e2c18a5b"
	I1025 09:35:38.126749  206009 cri.go:89] found id: "0bd9ad4a667885f7374b118b9cdffe51d851fae7ec99302cc3b3126ee7b47b5a"
	I1025 09:35:38.126785  206009 cri.go:89] found id: "d635ff4cda8268a9abc07881a8b41b2e3801e381e546026d0ae96f16d82bfcd1"
	I1025 09:35:38.126789  206009 cri.go:89] found id: "bb24bf5fb5b2b235640af84b0d07ce607dc3657b1f1f6569aef374242880b7fb"
	I1025 09:35:38.126792  206009 cri.go:89] found id: ""
	I1025 09:35:38.126872  206009 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:38.137437  206009 retry.go:31] will retry after 337.145823ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:38Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:35:38.474796  206009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:35:38.490008  206009 pause.go:52] kubelet running: false
	I1025 09:35:38.490116  206009 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:35:38.699011  206009 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:35:38.699116  206009 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:35:38.796589  206009 cri.go:89] found id: "554118d5ba9b888eff74911aeb9bc49200cf2e408aa2932423cd99c6fddc0070"
	I1025 09:35:38.796625  206009 cri.go:89] found id: "b3142984c1fee9ffc110ab096d6a0855ca60948dfef193f2d783f2d20bd9886e"
	I1025 09:35:38.796631  206009 cri.go:89] found id: "1cfe71fd1fe7a1b6bea21b990d2a3dbcc5dd1b17f294993c302c09956f95be67"
	I1025 09:35:38.796652  206009 cri.go:89] found id: "39a267437a269c4557082094c09ced55f8c3e472342c82d7fd03ae3a25b0f17e"
	I1025 09:35:38.796672  206009 cri.go:89] found id: "0b944f32fb8b576132d82481361923d19fdefbaee98287df71455fe148002ac8"
	I1025 09:35:38.796676  206009 cri.go:89] found id: "ddedd8b79fda0b2dea509f8022b5abaeb8024fdc8737fab8024dee99c98d3b19"
	I1025 09:35:38.796679  206009 cri.go:89] found id: "d6ad6127ca83d1792eb0f03aca451cdd0a78c05c4baaecc9fd3ec902ddd40c88"
	I1025 09:35:38.796683  206009 cri.go:89] found id: "8f686a2912e6c0a8d6e4d5311cba470140c0c77f3d59d36367a840a7e2c18a5b"
	I1025 09:35:38.796686  206009 cri.go:89] found id: "0bd9ad4a667885f7374b118b9cdffe51d851fae7ec99302cc3b3126ee7b47b5a"
	I1025 09:35:38.796693  206009 cri.go:89] found id: "d635ff4cda8268a9abc07881a8b41b2e3801e381e546026d0ae96f16d82bfcd1"
	I1025 09:35:38.796702  206009 cri.go:89] found id: "bb24bf5fb5b2b235640af84b0d07ce607dc3657b1f1f6569aef374242880b7fb"
	I1025 09:35:38.796706  206009 cri.go:89] found id: ""
	I1025 09:35:38.796785  206009 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:38.812069  206009 out.go:203] 
	W1025 09:35:38.815271  206009 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:38.815296  206009 out.go:285] * 
	* 
	W1025 09:35:38.821707  206009 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:38.824943  206009 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-173264 --alsologtostderr -v=1 failed: exit status 80
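
The failure above is mechanical: the pause path first enumerates containers with crictl, then cross-checks with `runc list -f json`, and on this CRI-O node the runc state directory /run/runc does not exist, so every attempt (including the ~280ms and ~340ms retries visible in the log) exits 1 until GUEST_PAUSE is raised. A minimal Go sketch of that retry-the-listing pattern follows; the helper name and backoff constants are illustrative, not minikube's actual implementation.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// listRunc shells out the same way the log's ssh_runner step does; on this
// node it fails because the runc state dir /run/runc does not exist.
func listRunc() ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list -f json: %w", err)
	}
	return out, nil
}

func main() {
	var lastErr error
	for attempt := 0; attempt < 3; attempt++ {
		out, err := listRunc()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		lastErr = err
		// The log shows sub-second jittered delays between attempts (276ms, 337ms).
		time.Sleep(250*time.Millisecond + time.Duration(rand.Intn(100))*time.Millisecond)
	}
	fmt.Println("giving up:", lastErr) // minikube surfaces this as GUEST_PAUSE
}
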
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
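
The HOST ENV snapshot above is the harness reading proxy variables before digging into Docker state; a tiny equivalent sketch (the variable list is assumed from the printed line, and `<empty>` mirrors the report's placeholder):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Variable list assumed from the harness's printed line above.
	for _, key := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
		val := os.Getenv(key)
		if val == "" {
			val = "<empty>" // mirrors the report's placeholder
		}
		fmt.Printf("%s=%q\n", key, val)
	}
}
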
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-173264
helpers_test.go:243: (dbg) docker inspect embed-certs-173264:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef",
	        "Created": "2025-10-25T09:32:48.526873954Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 200507,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:34:36.189440371Z",
	            "FinishedAt": "2025-10-25T09:34:35.250036912Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/hosts",
	        "LogPath": "/var/lib/docker/containers/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef-json.log",
	        "Name": "/embed-certs-173264",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-173264:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-173264",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef",
	                "LowerDir": "/var/lib/docker/overlay2/f31af6c3318ffc600cbea3cfd23719cc69a1f1792d31e48077fe84ae405b9fc8-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f31af6c3318ffc600cbea3cfd23719cc69a1f1792d31e48077fe84ae405b9fc8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f31af6c3318ffc600cbea3cfd23719cc69a1f1792d31e48077fe84ae405b9fc8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f31af6c3318ffc600cbea3cfd23719cc69a1f1792d31e48077fe84ae405b9fc8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-173264",
	                "Source": "/var/lib/docker/volumes/embed-certs-173264/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-173264",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-173264",
	                "name.minikube.sigs.k8s.io": "embed-certs-173264",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b28ae31e6379d7040dec40b9fec7ae1982dea4cc23e4da745f1b3db1f8133312",
	            "SandboxKey": "/var/run/docker/netns/b28ae31e6379",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-173264": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:33:5f:c9:b8:df",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2d181aa3ece229a97886c4873dbb8eca8797c23a56c68ee43959cebc56f78ff8",
	                    "EndpointID": "610f47692ebd9dc9591e9dc4c7087b8ff2a404d909f95a33f967b6fc7572cb8b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-173264",
	                        "7ab6ed1b9ea6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
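
Note in the inspect output that every published port is bound to 127.0.0.1 with an ephemeral HostPort (22/tcp resolved to 33073, exactly the SSH endpoint the pause log dialed). The Go-template query minikube ran for this is visible earlier in the log; below is a standalone sketch of the same lookup, assuming only that the docker CLI is on PATH.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort runs the same inspect template seen earlier in the log to find
// where Docker published the container's 22/tcp binding.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("embed-certs-173264")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh port:", port) // the inspect output above shows 33073
}
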
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-173264 -n embed-certs-173264
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-173264 -n embed-certs-173264: exit status 2 (433.614221ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
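
`minikube status` encodes cluster state in its exit code, which is why the harness treats exit status 2 alongside a printed "Running" as possibly benign. A hedged Go sketch of reading both the printed host state and the exit code (the "may be ok" interpretation follows the harness's note, not documented semantics):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "embed-certs-173264")
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit with a readable host state, as in the report above.
		fmt.Printf("host=%s exit=%d (non-zero may still be ok)\n", state, exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("host =", state)
}
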
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-173264 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-173264 logs -n 25: (1.60490123s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-881642       │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:32 UTC │
	│ image   │ old-k8s-version-881642 image list --format=json                                                                                                                                                                                               │ old-k8s-version-881642       │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ pause   │ -p old-k8s-version-881642 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-881642       │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ start   │ -p cert-expiration-440252 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-440252       │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-881642                                                                                                                                                                                                                     │ old-k8s-version-881642       │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-881642                                                                                                                                                                                                                     │ old-k8s-version-881642       │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:33 UTC │
	│ delete  │ -p cert-expiration-440252                                                                                                                                                                                                                     │ cert-expiration-440252       │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable metrics-server -p no-preload-179869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │                     │
	│ stop    │ -p no-preload-179869 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p no-preload-179869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-173264 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ stop    │ -p embed-certs-173264 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-173264 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:35 UTC │
	│ image   │ no-preload-179869 image list --format=json                                                                                                                                                                                                    │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ pause   │ -p no-preload-179869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ delete  │ -p no-preload-179869                                                                                                                                                                                                                          │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p no-preload-179869                                                                                                                                                                                                                          │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p disable-driver-mounts-901717                                                                                                                                                                                                               │ disable-driver-mounts-901717 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-666079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ image   │ embed-certs-173264 image list --format=json                                                                                                                                                                                                   │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ pause   │ -p embed-certs-173264 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:35:16
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:35:16.734589  203993 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:16.734776  203993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:16.734803  203993 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:16.734820  203993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:16.735108  203993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:35:16.735562  203993 out.go:368] Setting JSON to false
	I1025 09:35:16.736585  203993 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4668,"bootTime":1761380249,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:35:16.736683  203993 start.go:141] virtualization:  
	I1025 09:35:16.740570  203993 out.go:179] * [default-k8s-diff-port-666079] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:35:16.744829  203993 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:35:16.744840  203993 notify.go:220] Checking for updates...
	I1025 09:35:16.748196  203993 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:35:16.751439  203993 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:35:16.754537  203993 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:35:16.757681  203993 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:35:16.760626  203993 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:35:16.764118  203993 config.go:182] Loaded profile config "embed-certs-173264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:16.764225  203993 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:35:16.792364  203993 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:35:16.792505  203993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:35:16.865818  203993 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:35:16.855660379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:35:16.865936  203993 docker.go:318] overlay module found
	I1025 09:35:16.869309  203993 out.go:179] * Using the docker driver based on user configuration
	I1025 09:35:16.872296  203993 start.go:305] selected driver: docker
	I1025 09:35:16.872320  203993 start.go:925] validating driver "docker" against <nil>
	I1025 09:35:16.872335  203993 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:35:16.873123  203993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:35:16.932413  203993 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:35:16.923369159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:35:16.932567  203993 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:35:16.932811  203993 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:35:16.935903  203993 out.go:179] * Using Docker driver with root privileges
	I1025 09:35:16.938743  203993 cni.go:84] Creating CNI manager for ""
	I1025 09:35:16.938821  203993 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:35:16.938837  203993 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:35:16.938916  203993 start.go:349] cluster config:
	{Name:default-k8s-diff-port-666079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:35:16.942155  203993 out.go:179] * Starting "default-k8s-diff-port-666079" primary control-plane node in "default-k8s-diff-port-666079" cluster
	I1025 09:35:16.945027  203993 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:35:16.948068  203993 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:35:16.950880  203993 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:35:16.950938  203993 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:35:16.950953  203993 cache.go:58] Caching tarball of preloaded images
	I1025 09:35:16.951037  203993 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:35:16.951051  203993 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:35:16.951169  203993 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/config.json ...
	I1025 09:35:16.951192  203993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/config.json: {Name:mked4acc6ba01c7e06ccc90737ed7af84ba155de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:16.951353  203993 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:35:16.971030  203993 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:35:16.971058  203993 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:35:16.971071  203993 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:35:16.971094  203993 start.go:360] acquireMachinesLock for default-k8s-diff-port-666079: {Name:mk25f9f0a43388f7cdd9c3ecfcc6756ef82b00a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:35:16.971199  203993 start.go:364] duration metric: took 86.745µs to acquireMachinesLock for "default-k8s-diff-port-666079"
	I1025 09:35:16.971243  203993 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-666079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:35:16.971324  203993 start.go:125] createHost starting for "" (driver="docker")
	W1025 09:35:17.342456  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	W1025 09:35:19.343507  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	I1025 09:35:16.974805  203993 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:35:16.975042  203993 start.go:159] libmachine.API.Create for "default-k8s-diff-port-666079" (driver="docker")
	I1025 09:35:16.975084  203993 client.go:168] LocalClient.Create starting
	I1025 09:35:16.975167  203993 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem
	I1025 09:35:16.975205  203993 main.go:141] libmachine: Decoding PEM data...
	I1025 09:35:16.975222  203993 main.go:141] libmachine: Parsing certificate...
	I1025 09:35:16.975278  203993 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem
	I1025 09:35:16.975302  203993 main.go:141] libmachine: Decoding PEM data...
	I1025 09:35:16.975312  203993 main.go:141] libmachine: Parsing certificate...
	I1025 09:35:16.975708  203993 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-666079 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:35:16.992268  203993 cli_runner.go:211] docker network inspect default-k8s-diff-port-666079 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:35:16.992367  203993 network_create.go:284] running [docker network inspect default-k8s-diff-port-666079] to gather additional debugging logs...
	I1025 09:35:16.992388  203993 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-666079
	W1025 09:35:17.014851  203993 cli_runner.go:211] docker network inspect default-k8s-diff-port-666079 returned with exit code 1
	I1025 09:35:17.014884  203993 network_create.go:287] error running [docker network inspect default-k8s-diff-port-666079]: docker network inspect default-k8s-diff-port-666079: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-666079 not found
	I1025 09:35:17.014916  203993 network_create.go:289] output of [docker network inspect default-k8s-diff-port-666079]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-666079 not found
	
	** /stderr **
	I1025 09:35:17.015030  203993 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:35:17.031591  203993 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4076b76bdd01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:93:ad:e4:3e:11} reservation:<nil>}
	I1025 09:35:17.032017  203993 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ab40ae949743 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ea:83:23:78:ca:4d} reservation:<nil>}
	I1025 09:35:17.032340  203993 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ff3fdd90dcc2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:d4:a3:43:c3:da} reservation:<nil>}
	I1025 09:35:17.032868  203993 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3b0f0}
	I1025 09:35:17.032893  203993 network_create.go:124] attempt to create docker network default-k8s-diff-port-666079 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 09:35:17.033090  203993 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-666079 default-k8s-diff-port-666079
	I1025 09:35:17.107320  203993 network_create.go:108] docker network default-k8s-diff-port-666079 192.168.76.0/24 created
	I1025 09:35:17.107354  203993 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-666079" container
	I1025 09:35:17.107425  203993 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:35:17.125191  203993 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-666079 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-666079 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:35:17.143548  203993 oci.go:103] Successfully created a docker volume default-k8s-diff-port-666079
	I1025 09:35:17.143640  203993 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-666079-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-666079 --entrypoint /usr/bin/test -v default-k8s-diff-port-666079:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:35:17.750870  203993 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-666079
	I1025 09:35:17.750925  203993 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:35:17.750945  203993 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:35:17.751023  203993 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-666079:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1025 09:35:21.843135  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	I1025 09:35:23.343278  200380 pod_ready.go:94] pod "coredns-66bc5c9577-vgz5x" is "Ready"
	I1025 09:35:23.343309  200380 pod_ready.go:86] duration metric: took 33.006542057s for pod "coredns-66bc5c9577-vgz5x" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:23.348335  200380 pod_ready.go:83] waiting for pod "etcd-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:23.354920  200380 pod_ready.go:94] pod "etcd-embed-certs-173264" is "Ready"
	I1025 09:35:23.354947  200380 pod_ready.go:86] duration metric: took 6.579755ms for pod "etcd-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:23.357667  200380 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:23.365179  200380 pod_ready.go:94] pod "kube-apiserver-embed-certs-173264" is "Ready"
	I1025 09:35:23.365204  200380 pod_ready.go:86] duration metric: took 7.50359ms for pod "kube-apiserver-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:23.368764  200380 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:23.540850  200380 pod_ready.go:94] pod "kube-controller-manager-embed-certs-173264" is "Ready"
	I1025 09:35:23.540918  200380 pod_ready.go:86] duration metric: took 172.126162ms for pod "kube-controller-manager-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:23.740893  200380 pod_ready.go:83] waiting for pod "kube-proxy-gwv98" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:24.141134  200380 pod_ready.go:94] pod "kube-proxy-gwv98" is "Ready"
	I1025 09:35:24.141159  200380 pod_ready.go:86] duration metric: took 400.235889ms for pod "kube-proxy-gwv98" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:24.341369  200380 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:24.740202  200380 pod_ready.go:94] pod "kube-scheduler-embed-certs-173264" is "Ready"
	I1025 09:35:24.740233  200380 pod_ready.go:86] duration metric: took 398.838291ms for pod "kube-scheduler-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:24.740246  200380 pod_ready.go:40] duration metric: took 34.408609097s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:35:24.791433  200380 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:35:24.794852  200380 out.go:179] * Done! kubectl is now configured to use "embed-certs-173264" cluster and "default" namespace by default
	I1025 09:35:22.187900  203993 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-666079:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.43682399s)
	I1025 09:35:22.187934  203993 kic.go:203] duration metric: took 4.43698679s to extract preloaded images to volume ...
	W1025 09:35:22.188083  203993 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 09:35:22.188204  203993 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:35:22.262941  203993 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-666079 --name default-k8s-diff-port-666079 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-666079 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-666079 --network default-k8s-diff-port-666079 --ip 192.168.76.2 --volume default-k8s-diff-port-666079:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:35:22.595829  203993 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Running}}
	I1025 09:35:22.617102  203993 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:35:22.655541  203993 cli_runner.go:164] Run: docker exec default-k8s-diff-port-666079 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:35:22.710682  203993 oci.go:144] the created container "default-k8s-diff-port-666079" has a running status.
	I1025 09:35:22.710708  203993 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa...
	I1025 09:35:23.117018  203993 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:35:23.155804  203993 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:35:23.179628  203993 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:35:23.179647  203993 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-666079 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:35:23.236844  203993 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:35:23.260857  203993 machine.go:93] provisionDockerMachine start ...
	I1025 09:35:23.260957  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:23.286527  203993 main.go:141] libmachine: Using SSH client type: native
	I1025 09:35:23.286862  203993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1025 09:35:23.286872  203993 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:35:23.287626  203993 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 09:35:26.433553  203993 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-666079
	
	I1025 09:35:26.433577  203993 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-666079"
	I1025 09:35:26.433697  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:26.451631  203993 main.go:141] libmachine: Using SSH client type: native
	I1025 09:35:26.451955  203993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1025 09:35:26.451971  203993 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-666079 && echo "default-k8s-diff-port-666079" | sudo tee /etc/hostname
	I1025 09:35:26.615279  203993 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-666079
	
	I1025 09:35:26.615400  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:26.636584  203993 main.go:141] libmachine: Using SSH client type: native
	I1025 09:35:26.636928  203993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1025 09:35:26.636954  203993 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-666079' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-666079/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-666079' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:35:26.786297  203993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:35:26.786367  203993 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:35:26.786413  203993 ubuntu.go:190] setting up certificates
	I1025 09:35:26.786455  203993 provision.go:84] configureAuth start
	I1025 09:35:26.786568  203993 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-666079
	I1025 09:35:26.803806  203993 provision.go:143] copyHostCerts
	I1025 09:35:26.803881  203993 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:35:26.803894  203993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:35:26.803974  203993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:35:26.804079  203993 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:35:26.804088  203993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:35:26.804117  203993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:35:26.804213  203993 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:35:26.804224  203993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:35:26.804248  203993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 09:35:26.804302  203993 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-666079 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-666079 localhost minikube]
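
The SAN set in that line is what makes the generated server certificate usable both over the container's published loopback port and in-cluster: it has to cover 127.0.0.1, the bridge IP 192.168.76.2, and the machine's hostnames. A quick way to confirm the SANs landed in the generated .minikube/machines/server.pem (an illustrative command, not part of this run; exact entry order and output formatting vary by openssl version):

	openssl x509 -noout -text -in server.pem | grep -A1 'Subject Alternative Name'
	    X509v3 Subject Alternative Name:
	        DNS:default-k8s-diff-port-666079, DNS:localhost, DNS:minikube, IP Address:127.0.0.1, IP Address:192.168.76.2
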
	I1025 09:35:27.281536  203993 provision.go:177] copyRemoteCerts
	I1025 09:35:27.281615  203993 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:35:27.281657  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:27.299201  203993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:35:27.406406  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:35:27.432457  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 09:35:27.449732  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:35:27.467224  203993 provision.go:87] duration metric: took 680.7321ms to configureAuth
	I1025 09:35:27.467252  203993 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:35:27.467500  203993 config.go:182] Loaded profile config "default-k8s-diff-port-666079": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:27.467658  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:27.485078  203993 main.go:141] libmachine: Using SSH client type: native
	I1025 09:35:27.485400  203993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1025 09:35:27.485420  203993 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:35:27.828831  203993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:35:27.828851  203993 machine.go:96] duration metric: took 4.567974893s to provisionDockerMachine
	I1025 09:35:27.828860  203993 client.go:171] duration metric: took 10.853765254s to LocalClient.Create
	I1025 09:35:27.828883  203993 start.go:167] duration metric: took 10.853842333s to libmachine.API.Create "default-k8s-diff-port-666079"
	I1025 09:35:27.828892  203993 start.go:293] postStartSetup for "default-k8s-diff-port-666079" (driver="docker")
	I1025 09:35:27.828902  203993 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:35:27.828967  203993 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:35:27.829006  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:27.847386  203993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:35:27.954424  203993 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:35:27.957890  203993 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:35:27.957924  203993 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:35:27.957937  203993 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:35:27.958039  203993 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:35:27.958136  203993 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:35:27.958251  203993 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:35:27.966274  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:35:27.985493  203993 start.go:296] duration metric: took 156.587247ms for postStartSetup
	I1025 09:35:27.985865  203993 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-666079
	I1025 09:35:28.010877  203993 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/config.json ...
	I1025 09:35:28.011202  203993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:35:28.011257  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:28.030749  203993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:35:28.135614  203993 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:35:28.140965  203993 start.go:128] duration metric: took 11.169625999s to createHost
	I1025 09:35:28.140988  203993 start.go:83] releasing machines lock for "default-k8s-diff-port-666079", held for 11.169772725s
	I1025 09:35:28.141058  203993 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-666079
	I1025 09:35:28.158845  203993 ssh_runner.go:195] Run: cat /version.json
	I1025 09:35:28.158905  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:28.158972  203993 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:35:28.159026  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:28.176938  203993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:35:28.180011  203993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:35:28.281749  203993 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:28.374612  203993 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:35:28.412295  203993 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:35:28.416957  203993 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:35:28.417031  203993 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:35:28.454106  203993 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 09:35:28.454127  203993 start.go:495] detecting cgroup driver to use...
	I1025 09:35:28.454160  203993 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:35:28.454213  203993 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:35:28.471570  203993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:35:28.484603  203993 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:35:28.484673  203993 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:35:28.502821  203993 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:35:28.530299  203993 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:35:28.656161  203993 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:35:28.791083  203993 docker.go:234] disabling docker service ...
	I1025 09:35:28.791155  203993 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:35:28.814761  203993 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:35:28.828564  203993 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:35:28.953712  203993 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:35:29.081097  203993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:35:29.095274  203993 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:35:29.109128  203993 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:35:29.109196  203993 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:35:29.118153  203993 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:35:29.118272  203993 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:35:29.128419  203993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:35:29.137200  203993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:35:29.145717  203993 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:35:29.153783  203993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:35:29.162986  203993 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:35:29.177335  203993 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
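
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf looking roughly like the fragment below. This is a sketch reconstructed from the commands in the log, not a dump of the actual file; the section headers and any other keys shipped in the kicbase image are assumptions:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
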
	I1025 09:35:29.186373  203993 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:35:29.193845  203993 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:35:29.201610  203993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:35:29.318025  203993 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:35:29.463651  203993 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:35:29.463788  203993 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:35:29.467787  203993 start.go:563] Will wait 60s for crictl version
	I1025 09:35:29.467920  203993 ssh_runner.go:195] Run: which crictl
	I1025 09:35:29.471723  203993 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:35:29.504308  203993 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:35:29.504468  203993 ssh_runner.go:195] Run: crio --version
	I1025 09:35:29.546286  203993 ssh_runner.go:195] Run: crio --version
	I1025 09:35:29.579337  203993 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:35:29.582205  203993 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-666079 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:35:29.598849  203993 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 09:35:29.602695  203993 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:35:29.612887  203993 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-666079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:35:29.613015  203993 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:35:29.613075  203993 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:35:29.649838  203993 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:35:29.649864  203993 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:35:29.649920  203993 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:35:29.675917  203993 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:35:29.675939  203993 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:35:29.675947  203993 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1025 09:35:29.676092  203993 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-666079 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:35:29.676178  203993 ssh_runner.go:195] Run: crio config
	I1025 09:35:29.739933  203993 cni.go:84] Creating CNI manager for ""
	I1025 09:35:29.739961  203993 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:35:29.740009  203993 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:35:29.740042  203993 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-666079 NodeName:default-k8s-diff-port-666079 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:35:29.740242  203993 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-666079"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
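
The generated file is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---; kubeadm reads all of them from the single --config file. On kubeadm releases that ship the validate subcommand, the file can be sanity-checked before init (illustrative only, not a command from this run):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
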
	I1025 09:35:29.740342  203993 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:35:29.748160  203993 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:35:29.748268  203993 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:35:29.755652  203993 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1025 09:35:29.768491  203993 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:35:29.781294  203993 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1025 09:35:29.794711  203993 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:35:29.799438  203993 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:35:29.809393  203993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:35:29.929912  203993 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:35:29.948052  203993 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079 for IP: 192.168.76.2
	I1025 09:35:29.948089  203993 certs.go:195] generating shared ca certs ...
	I1025 09:35:29.948123  203993 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:29.948322  203993 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:35:29.948392  203993 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:35:29.948405  203993 certs.go:257] generating profile certs ...
	I1025 09:35:29.948479  203993 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.key
	I1025 09:35:29.948497  203993 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt with IP's: []
	I1025 09:35:30.581675  203993 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt ...
	I1025 09:35:30.581713  203993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt: {Name:mkd6f8f4eed87bbc3b8a62ed0863f4de58c2de6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:30.581921  203993 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.key ...
	I1025 09:35:30.581945  203993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.key: {Name:mk2010ec0938e0114aafcd1480a556e177e64b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:30.582077  203993 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.key.f342de6b
	I1025 09:35:30.582099  203993 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.crt.f342de6b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 09:35:31.413066  203993 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.crt.f342de6b ...
	I1025 09:35:31.413098  203993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.crt.f342de6b: {Name:mk297ed3268847965db21d2ec9fa2837a9b902e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:31.413311  203993 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.key.f342de6b ...
	I1025 09:35:31.413327  203993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.key.f342de6b: {Name:mk70e34533dd755d2c65936b0b288cebda13ad2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:31.413418  203993 certs.go:382] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.crt.f342de6b -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.crt
	I1025 09:35:31.413509  203993 certs.go:386] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.key.f342de6b -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.key
	I1025 09:35:31.413574  203993 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.key
	I1025 09:35:31.413593  203993 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.crt with IP's: []
	I1025 09:35:32.176268  203993 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.crt ...
	I1025 09:35:32.176297  203993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.crt: {Name:mk1a5138837f9b473aea352f22f36cc4e4d38e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:32.176462  203993 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.key ...
	I1025 09:35:32.176478  203993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.key: {Name:mkf8e3e716007e3b8dde8195a5217860b40b497e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:32.176650  203993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:35:32.176696  203993 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:35:32.176710  203993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:35:32.176750  203993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:35:32.176778  203993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:35:32.176804  203993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:35:32.176850  203993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:35:32.177483  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:35:32.195073  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:35:32.212814  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:35:32.231737  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:35:32.250660  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 09:35:32.268661  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:35:32.287914  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:35:32.306500  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:35:32.325931  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:35:32.343925  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:35:32.362163  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:35:32.380544  203993 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:35:32.393854  203993 ssh_runner.go:195] Run: openssl version
	I1025 09:35:32.400358  203993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:35:32.408967  203993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:35:32.412928  203993 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:35:32.413034  203993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:35:32.455731  203993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:35:32.464228  203993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:35:32.473386  203993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:35:32.477282  203993 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:35:32.477346  203993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:35:32.518955  203993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
	I1025 09:35:32.527537  203993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:35:32.536076  203993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:35:32.539852  203993 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:35:32.539953  203993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:35:32.581343  203993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
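
The .0 names created above are OpenSSL subject-hash links: tools that scan /etc/ssl/certs look a CA up by the hash of its subject, so each PEM gets a symlink named <hash>.0. That hash is exactly what the openssl x509 -hash invocations in this log print; for minikubeCA.pem the value is b5213941, matching the link created earlier (illustrative recap of the commands already run):

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
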
	I1025 09:35:32.589727  203993 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:35:32.593250  203993 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:35:32.593319  203993 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-666079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:35:32.593402  203993 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:32.593475  203993 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:32.619622  203993 cri.go:89] found id: ""
	I1025 09:35:32.619762  203993 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:35:32.627520  203993 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:35:32.635407  203993 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:35:32.635472  203993 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:35:32.644117  203993 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:35:32.644139  203993 kubeadm.go:157] found existing configuration files:
	
	I1025 09:35:32.644229  203993 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1025 09:35:32.653027  203993 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:35:32.653131  203993 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:35:32.660535  203993 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1025 09:35:32.668031  203993 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:35:32.668114  203993 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:35:32.675425  203993 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1025 09:35:32.683117  203993 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:35:32.683178  203993 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:35:32.690358  203993 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1025 09:35:32.698251  203993 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:35:32.698326  203993 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:35:32.706173  203993 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:35:32.747409  203993 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:35:32.747737  203993 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:35:32.773884  203993 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:35:32.774061  203993 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 09:35:32.774114  203993 kubeadm.go:318] OS: Linux
	I1025 09:35:32.774189  203993 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:35:32.774250  203993 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 09:35:32.774306  203993 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:35:32.774373  203993 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:35:32.774436  203993 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:35:32.774500  203993 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:35:32.774553  203993 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:35:32.774617  203993 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:35:32.774672  203993 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 09:35:32.856395  203993 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:35:32.856513  203993 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:35:32.856614  203993 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:35:32.868500  203993 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:35:32.874262  203993 out.go:252]   - Generating certificates and keys ...
	I1025 09:35:32.874358  203993 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:35:32.874435  203993 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:35:33.140121  203993 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:35:33.611471  203993 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:35:34.705009  203993 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:35:35.223586  203993 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:35:35.423755  203993 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:35:35.423916  203993 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-666079 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 09:35:35.862633  203993 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:35:35.863026  203993 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-666079 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 09:35:36.627822  203993 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	
	
	==> CRI-O <==
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.385626805Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1df58a46-7d3d-4fe7-ac88-f0830ba5b77c name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.387075511Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2b0b3e7b-c626-4a71-aba0-bec8450b8148 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.387190352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.397243462Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.397438976Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/75a434fe13aa89bcaa3f46c899bee7a4cfcc44e284a72f2f3f21627356062229/merged/etc/passwd: no such file or directory"
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.397470049Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/75a434fe13aa89bcaa3f46c899bee7a4cfcc44e284a72f2f3f21627356062229/merged/etc/group: no such file or directory"
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.397797314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.431733419Z" level=info msg="Created container 554118d5ba9b888eff74911aeb9bc49200cf2e408aa2932423cd99c6fddc0070: kube-system/storage-provisioner/storage-provisioner" id=2b0b3e7b-c626-4a71-aba0-bec8450b8148 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.432810265Z" level=info msg="Starting container: 554118d5ba9b888eff74911aeb9bc49200cf2e408aa2932423cd99c6fddc0070" id=840edb25-3bfe-4068-a13b-11acf9c4875c name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.449024023Z" level=info msg="Started container" PID=1639 containerID=554118d5ba9b888eff74911aeb9bc49200cf2e408aa2932423cd99c6fddc0070 description=kube-system/storage-provisioner/storage-provisioner id=840edb25-3bfe-4068-a13b-11acf9c4875c name=/runtime.v1.RuntimeService/StartContainer sandboxID=9dfd97285022d098c5fa00e972d7558c4efcd73644e5cbcf5f6d46ec0791451a
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.828696385Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.83355552Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.833941223Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.834133397Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.843789931Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.843938594Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.84401016Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.848825208Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.848864331Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.848886509Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.852031171Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.85217843Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.85225898Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.856006808Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.8561589Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	554118d5ba9b8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   9dfd97285022d       storage-provisioner                          kube-system
	d635ff4cda826       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   9992589c39b0f       dashboard-metrics-scraper-6ffb444bf9-h6zck   kubernetes-dashboard
	bb24bf5fb5b2b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago      Running             kubernetes-dashboard        0                   4874621ec6a99       kubernetes-dashboard-855c9754f9-sj8dq        kubernetes-dashboard
	b3142984c1fee       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago      Running             coredns                     1                   e44d3bb532542       coredns-66bc5c9577-vgz5x                     kube-system
	7dc2d09da875d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   923bde58161a8       busybox                                      default
	1cfe71fd1fe7a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   9dfd97285022d       storage-provisioner                          kube-system
	39a267437a269       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago      Running             kube-proxy                  1                   1f387f1e79854       kube-proxy-gwv98                             kube-system
	0b944f32fb8b5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   f8bcd9fd6305e       kindnet-862lz                                kube-system
	ddedd8b79fda0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           56 seconds ago      Running             etcd                        1                   cc27e2829ddb0       etcd-embed-certs-173264                      kube-system
	d6ad6127ca83d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           56 seconds ago      Running             kube-controller-manager     1                   c294c59dc6881       kube-controller-manager-embed-certs-173264   kube-system
	8f686a2912e6c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           56 seconds ago      Running             kube-apiserver              1                   cb61274475e44       kube-apiserver-embed-certs-173264            kube-system
	0bd9ad4a66788       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           56 seconds ago      Running             kube-scheduler              1                   8747767de132d       kube-scheduler-embed-certs-173264            kube-system
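Two storage-provisioner rows appear because the first post-restart attempt (1cfe71fd…, ATTEMPT 1) exited and the kubelet started a replacement (554118d5…, ATTEMPT 2); dashboard-metrics-scraper likewise shows an Exited container after two failed attempts. This table is roughly what crictl reports on the node; to reproduce it including exited containers (a sketch, assuming crictl is on PATH inside the node, as it is in the kicbase image):

    minikube ssh -p embed-certs-173264 -- sudo crictl ps -a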
	
	
	==> coredns [b3142984c1fee9ffc110ab096d6a0855ca60948dfef193f2d783f2d20bd9886e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42199 - 24989 "HINFO IN 5904666286340328563.1330875012800359847. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014940983s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
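The three i/o timeouts against https://10.96.0.1:443 (the kubernetes Service ClusterIP) line up with the kindnet and storage-provisioner failures further down: for roughly thirty seconds after the restart, pods apparently could not reach the apiserver through the service VIP, and CoreDNS started with an unsynced Kubernetes API, as the WARNING notes. To confirm the VIP and its backing endpoints for this profile (a sketch):

    kubectl --context embed-certs-173264 get svc,endpointslices -n default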
	
	
	==> describe nodes <==
	Name:               embed-certs-173264
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-173264
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=embed-certs-173264
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_33_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-173264
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:35:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:35:19 +0000   Sat, 25 Oct 2025 09:33:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:35:19 +0000   Sat, 25 Oct 2025 09:33:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:35:19 +0000   Sat, 25 Oct 2025 09:33:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:35:19 +0000   Sat, 25 Oct 2025 09:34:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-173264
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                fd815287-48cc-43e1-a791-5bcdc882763d
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-vgz5x                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m13s
	  kube-system                 etcd-embed-certs-173264                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m19s
	  kube-system                 kindnet-862lz                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m13s
	  kube-system                 kube-apiserver-embed-certs-173264             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-controller-manager-embed-certs-173264    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-proxy-gwv98                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-scheduler-embed-certs-173264             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-h6zck    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-sj8dq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m11s                  kube-proxy       
	  Normal   Starting                 50s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m28s (x8 over 2m28s)  kubelet          Node embed-certs-173264 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m28s (x8 over 2m28s)  kubelet          Node embed-certs-173264 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m28s (x8 over 2m28s)  kubelet          Node embed-certs-173264 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m18s                  kubelet          Node embed-certs-173264 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m18s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m18s                  kubelet          Node embed-certs-173264 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m18s                  kubelet          Node embed-certs-173264 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m18s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m14s                  node-controller  Node embed-certs-173264 event: Registered Node embed-certs-173264 in Controller
	  Normal   NodeReady                92s                    kubelet          Node embed-certs-173264 status is now: NodeReady
	  Normal   Starting                 57s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 57s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  57s (x8 over 57s)      kubelet          Node embed-certs-173264 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x8 over 57s)      kubelet          Node embed-certs-173264 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x8 over 57s)      kubelet          Node embed-certs-173264 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                    node-controller  Node embed-certs-173264 event: Registered Node embed-certs-173264 in Controller
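This block is kubectl describe node output for the single control-plane node. The duplicated NodeHasSufficientMemory/NodeHasNoDiskPressure/NodeHasSufficientPID and Starting events reflect the two kubelet lifetimes: the original start (~2m18s ago) and the post-restart kubelet (~57s ago). To regenerate it against this profile (a sketch using the context name from the harness):

    kubectl --context embed-certs-173264 describe node embed-certs-173264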
	
	
	==> dmesg <==
	[Oct25 09:11] overlayfs: idmapped layers are currently not supported
	[Oct25 09:13] overlayfs: idmapped layers are currently not supported
	[ +18.632418] overlayfs: idmapped layers are currently not supported
	[ +13.271347] overlayfs: idmapped layers are currently not supported
	[Oct25 09:14] overlayfs: idmapped layers are currently not supported
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	[Oct25 09:32] overlayfs: idmapped layers are currently not supported
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	[ +28.289706] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ddedd8b79fda0b2dea509f8022b5abaeb8024fdc8737fab8024dee99c98d3b19] <==
	{"level":"warn","ts":"2025-10-25T09:34:47.078928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.090918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.111298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.129634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.152846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.168762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.189249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.202449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.231963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.252764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.269874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.289893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.312297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.338672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.343636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.362381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.389086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.436837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.447534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.489943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.490813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.538178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.542737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.559667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.665802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44016","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:35:40 up  1:18,  0 user,  load average: 4.97, 3.89, 3.02
	Linux embed-certs-173264 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0b944f32fb8b576132d82481361923d19fdefbaee98287df71455fe148002ac8] <==
	I1025 09:34:49.629649       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:34:49.630140       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 09:34:49.630320       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:34:49.630368       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:34:49.630400       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:34:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:34:49.915286       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:34:49.915320       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:34:49.915420       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:34:49.916676       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 09:35:19.826120       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:35:19.916792       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:35:19.916903       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:35:19.917021       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1025 09:35:21.115665       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:35:21.115789       1 metrics.go:72] Registering metrics
	I1025 09:35:21.115856       1 controller.go:711] "Syncing nftables rules"
	I1025 09:35:29.828424       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:35:29.828461       1 main.go:301] handling current node
	I1025 09:35:39.834134       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:35:39.834165       1 main.go:301] handling current node
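kindnet shows the same ~30s apiserver blackout (list failures at 09:35:19) and then "Caches are synced" at 09:35:21, after which it resumes its ten-second node-reconcile loop ("handling current node" at 09:35:29 and 09:35:39). The nri.sock error appears benign here: it just means the runtime exposes no NRI socket. To pull the same log via the DaemonSet selector (a sketch, assuming minikube's usual app=kindnet label):

    kubectl --context embed-certs-173264 -n kube-system logs -l app=kindnet --tail=20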
	
	
	==> kube-apiserver [8f686a2912e6c0a8d6e4d5311cba470140c0c77f3d59d36367a840a7e2c18a5b] <==
	I1025 09:34:48.725548       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:34:48.725828       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1025 09:34:48.747460       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:34:48.752580       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:34:48.756442       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:34:48.756487       1 policy_source.go:240] refreshing policies
	I1025 09:34:48.800711       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:34:48.800927       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:34:48.800967       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:34:48.800980       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:34:48.801555       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:34:48.801567       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:34:48.813161       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:34:48.831211       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:34:49.102719       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:34:49.308444       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:34:49.312026       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:34:49.392945       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:34:49.463601       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:34:49.509846       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:34:49.768131       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.161.202"}
	I1025 09:34:49.851249       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.1.106"}
	I1025 09:34:51.900253       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:34:52.279192       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:34:52.380899       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d6ad6127ca83d1792eb0f03aca451cdd0a78c05c4baaecc9fd3ec902ddd40c88] <==
	I1025 09:34:51.902316       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:34:51.905727       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:34:51.907439       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:34:51.908570       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:34:51.915395       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:34:51.922119       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:34:51.922196       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:34:51.922272       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:34:51.922339       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:34:51.922437       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-173264"
	I1025 09:34:51.922489       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:34:51.922525       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:34:51.922553       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:34:51.922596       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 09:34:51.922181       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:34:51.922686       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:34:51.922716       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:34:51.924213       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:34:51.925384       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:34:51.926577       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:34:51.926650       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:34:51.926661       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:34:51.930135       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:34:51.932600       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:34:52.285177       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [39a267437a269c4557082094c09ced55f8c3e472342c82d7fd03ae3a25b0f17e] <==
	I1025 09:34:49.827031       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:34:49.950808       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:34:50.090308       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:34:50.111185       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 09:34:50.111369       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:34:50.166681       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:34:50.166810       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:34:50.186746       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:34:50.187154       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:34:50.188099       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:34:50.191696       1 config.go:200] "Starting service config controller"
	I1025 09:34:50.200030       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:34:50.200092       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:34:50.200098       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:34:50.200113       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:34:50.200117       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:34:50.204467       1 config.go:309] "Starting node config controller"
	I1025 09:34:50.204497       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:34:50.204506       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:34:50.300297       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:34:50.302137       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:34:50.302180       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0bd9ad4a667885f7374b118b9cdffe51d851fae7ec99302cc3b3126ee7b47b5a] <==
	I1025 09:34:47.473769       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:34:48.460524       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:34:48.460574       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:34:48.460586       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:34:48.460593       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:34:48.575229       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:34:48.575264       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:34:48.577663       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:34:48.577777       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:34:48.577802       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:34:48.577823       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 09:34:48.678309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [role.rbac.authorization.k8s.io \"extension-apiserver-authentication-reader\" not found, role.rbac.authorization.k8s.io \"system::leader-locking-kube-scheduler\" not found]" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1025 09:34:49.581601       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
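The forbidden-configmap error is a startup race: the scheduler asks for kube-system/extension-apiserver-authentication before the RBAC bootstrap roles exist, then succeeds once caches sync about a second later ("Caches are synced … client-ca-file"). To confirm the bootstrap role is in place after startup (a sketch):

    kubectl --context embed-certs-173264 -n kube-system get role extension-apiserver-authentication-reader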
	
	
	==> kubelet <==
	Oct 25 09:34:52 embed-certs-173264 kubelet[779]: E1025 09:34:52.327850     779 projected.go:196] Error preparing data for projected volume kube-api-access-ml4l7 for pod kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sj8dq: configmap "kube-root-ca.crt" not found
	Oct 25 09:34:52 embed-certs-173264 kubelet[779]: E1025 09:34:52.328988     779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/25fa6987-f91d-42a5-8ef0-848aca718f8c-kube-api-access-ml4l7 podName:25fa6987-f91d-42a5-8ef0-848aca718f8c nodeName:}" failed. No retries permitted until 2025-10-25 09:34:52.828295268 +0000 UTC m=+9.922960806 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ml4l7" (UniqueName: "kubernetes.io/projected/25fa6987-f91d-42a5-8ef0-848aca718f8c-kube-api-access-ml4l7") pod "kubernetes-dashboard-855c9754f9-sj8dq" (UID: "25fa6987-f91d-42a5-8ef0-848aca718f8c") : configmap "kube-root-ca.crt" not found
	Oct 25 09:34:52 embed-certs-173264 kubelet[779]: E1025 09:34:52.329217     779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5c1980d7-e996-4bf9-b950-107c19ab5941-kube-api-access-ggv52 podName:5c1980d7-e996-4bf9-b950-107c19ab5941 nodeName:}" failed. No retries permitted until 2025-10-25 09:34:52.829174565 +0000 UTC m=+9.923840103 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ggv52" (UniqueName: "kubernetes.io/projected/5c1980d7-e996-4bf9-b950-107c19ab5941-kube-api-access-ggv52") pod "dashboard-metrics-scraper-6ffb444bf9-h6zck" (UID: "5c1980d7-e996-4bf9-b950-107c19ab5941") : configmap "kube-root-ca.crt" not found
	Oct 25 09:34:53 embed-certs-173264 kubelet[779]: I1025 09:34:53.058282     779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 09:34:53 embed-certs-173264 kubelet[779]: W1025 09:34:53.172231     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/crio-9992589c39b0fa029903d806610e601102eea590f03e7dc663d777dbd3088a0f WatchSource:0}: Error finding container 9992589c39b0fa029903d806610e601102eea590f03e7dc663d777dbd3088a0f: Status 404 returned error can't find the container with id 9992589c39b0fa029903d806610e601102eea590f03e7dc663d777dbd3088a0f
	Oct 25 09:34:53 embed-certs-173264 kubelet[779]: W1025 09:34:53.179607     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/crio-4874621ec6a99b3817ba8ced99c102f65fea0fadea6e133090b65bf43145da64 WatchSource:0}: Error finding container 4874621ec6a99b3817ba8ced99c102f65fea0fadea6e133090b65bf43145da64: Status 404 returned error can't find the container with id 4874621ec6a99b3817ba8ced99c102f65fea0fadea6e133090b65bf43145da64
	Oct 25 09:34:58 embed-certs-173264 kubelet[779]: I1025 09:34:58.216236     779 scope.go:117] "RemoveContainer" containerID="fdc0c4634f373a9d3eb8644b7f8f1367a73154717e63aeb3712718f163fd9fa7"
	Oct 25 09:34:59 embed-certs-173264 kubelet[779]: I1025 09:34:59.224409     779 scope.go:117] "RemoveContainer" containerID="c635b194523d6bb46b03d6567debf06efef47ab7249fcab83ccf4e5a6b461d3d"
	Oct 25 09:34:59 embed-certs-173264 kubelet[779]: E1025 09:34:59.224570     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6zck_kubernetes-dashboard(5c1980d7-e996-4bf9-b950-107c19ab5941)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6zck" podUID="5c1980d7-e996-4bf9-b950-107c19ab5941"
	Oct 25 09:34:59 embed-certs-173264 kubelet[779]: I1025 09:34:59.224727     779 scope.go:117] "RemoveContainer" containerID="fdc0c4634f373a9d3eb8644b7f8f1367a73154717e63aeb3712718f163fd9fa7"
	Oct 25 09:35:00 embed-certs-173264 kubelet[779]: I1025 09:35:00.323401     779 scope.go:117] "RemoveContainer" containerID="c635b194523d6bb46b03d6567debf06efef47ab7249fcab83ccf4e5a6b461d3d"
	Oct 25 09:35:00 embed-certs-173264 kubelet[779]: E1025 09:35:00.323598     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6zck_kubernetes-dashboard(5c1980d7-e996-4bf9-b950-107c19ab5941)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6zck" podUID="5c1980d7-e996-4bf9-b950-107c19ab5941"
	Oct 25 09:35:03 embed-certs-173264 kubelet[779]: I1025 09:35:03.104972     779 scope.go:117] "RemoveContainer" containerID="c635b194523d6bb46b03d6567debf06efef47ab7249fcab83ccf4e5a6b461d3d"
	Oct 25 09:35:03 embed-certs-173264 kubelet[779]: E1025 09:35:03.105160     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6zck_kubernetes-dashboard(5c1980d7-e996-4bf9-b950-107c19ab5941)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6zck" podUID="5c1980d7-e996-4bf9-b950-107c19ab5941"
	Oct 25 09:35:15 embed-certs-173264 kubelet[779]: I1025 09:35:15.079122     779 scope.go:117] "RemoveContainer" containerID="c635b194523d6bb46b03d6567debf06efef47ab7249fcab83ccf4e5a6b461d3d"
	Oct 25 09:35:15 embed-certs-173264 kubelet[779]: I1025 09:35:15.365503     779 scope.go:117] "RemoveContainer" containerID="c635b194523d6bb46b03d6567debf06efef47ab7249fcab83ccf4e5a6b461d3d"
	Oct 25 09:35:15 embed-certs-173264 kubelet[779]: I1025 09:35:15.365735     779 scope.go:117] "RemoveContainer" containerID="d635ff4cda8268a9abc07881a8b41b2e3801e381e546026d0ae96f16d82bfcd1"
	Oct 25 09:35:15 embed-certs-173264 kubelet[779]: E1025 09:35:15.365917     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6zck_kubernetes-dashboard(5c1980d7-e996-4bf9-b950-107c19ab5941)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6zck" podUID="5c1980d7-e996-4bf9-b950-107c19ab5941"
	Oct 25 09:35:15 embed-certs-173264 kubelet[779]: I1025 09:35:15.386348     779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sj8dq" podStartSLOduration=13.614154313 podStartE2EDuration="23.384707201s" podCreationTimestamp="2025-10-25 09:34:52 +0000 UTC" firstStartedPulling="2025-10-25 09:34:53.183312701 +0000 UTC m=+10.277978239" lastFinishedPulling="2025-10-25 09:35:02.953865581 +0000 UTC m=+20.048531127" observedRunningTime="2025-10-25 09:35:03.350599555 +0000 UTC m=+20.445265101" watchObservedRunningTime="2025-10-25 09:35:15.384707201 +0000 UTC m=+32.479372755"
	Oct 25 09:35:20 embed-certs-173264 kubelet[779]: I1025 09:35:20.383281     779 scope.go:117] "RemoveContainer" containerID="1cfe71fd1fe7a1b6bea21b990d2a3dbcc5dd1b17f294993c302c09956f95be67"
	Oct 25 09:35:23 embed-certs-173264 kubelet[779]: I1025 09:35:23.107772     779 scope.go:117] "RemoveContainer" containerID="d635ff4cda8268a9abc07881a8b41b2e3801e381e546026d0ae96f16d82bfcd1"
	Oct 25 09:35:23 embed-certs-173264 kubelet[779]: E1025 09:35:23.108237     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6zck_kubernetes-dashboard(5c1980d7-e996-4bf9-b950-107c19ab5941)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6zck" podUID="5c1980d7-e996-4bf9-b950-107c19ab5941"
	Oct 25 09:35:37 embed-certs-173264 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:35:37 embed-certs-173264 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:35:37 embed-certs-173264 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
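The kubelet log captures the dashboard-metrics-scraper CrashLoopBackOff, with the kubelet's exponential backoff visible in the messages (back-off 10s, then 20s), and it ends with systemd stopping kubelet.service, which is the Pause step of this test stopping the node rather than a crash. To see why the scraper container keeps exiting, the previous instance's log is the first stop (pod name taken from this run):

    kubectl --context embed-certs-173264 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-h6zck --previous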
	
	
	==> kubernetes-dashboard [bb24bf5fb5b2b235640af84b0d07ce607dc3657b1f1f6569aef374242880b7fb] <==
	2025/10/25 09:35:03 Using namespace: kubernetes-dashboard
	2025/10/25 09:35:03 Using in-cluster config to connect to apiserver
	2025/10/25 09:35:03 Using secret token for csrf signing
	2025/10/25 09:35:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:35:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:35:03 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:35:03 Generating JWE encryption key
	2025/10/25 09:35:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:35:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:35:04 Initializing JWE encryption key from synchronized object
	2025/10/25 09:35:04 Creating in-cluster Sidecar client
	2025/10/25 09:35:04 Serving insecurely on HTTP port: 9090
	2025/10/25 09:35:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:35:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:35:03 Starting overwatch
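The out-of-order "Starting overwatch" line is likely a log-interleaving artifact; chronologically it is the dashboard's first message. The repeated metric-client health-check failure is consistent with dashboard-metrics-scraper crash-looping above: the dashboard cannot reach the scraper Service, so it retries every 30 seconds; this is a symptom of the scraper failure, not an independent one. Whether the scraper Service has any ready backends can be checked directly (a sketch):

    kubectl --context embed-certs-173264 -n kubernetes-dashboard get endpointslices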
	
	
	==> storage-provisioner [1cfe71fd1fe7a1b6bea21b990d2a3dbcc5dd1b17f294993c302c09956f95be67] <==
	I1025 09:34:49.872533       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:35:19.881533       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [554118d5ba9b888eff74911aeb9bc49200cf2e408aa2932423cd99c6fddc0070] <==
	I1025 09:35:20.474373       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:35:20.491911       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:35:20.492055       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:35:20.495319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:23.950265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:28.213843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:31.812836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:34.867424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:37.890502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:37.903028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:35:37.906484       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:35:37.919777       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-173264_3dc15f09-793c-4197-8ad1-b3e5f1085083!
	I1025 09:35:37.907030       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f18b8aca-cedc-4b01-aff4-d043bcc5db0c", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-173264_3dc15f09-793c-4197-8ad1-b3e5f1085083 became leader
	W1025 09:35:37.932982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:37.954513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:35:38.020785       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-173264_3dc15f09-793c-4197-8ad1-b3e5f1085083!
	W1025 09:35:39.969161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:39.974432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
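The replacement storage-provisioner succeeds: it acquires the kube-system/k8s.io-minikube-hostpath leader lease and starts its controller. The recurring v1 Endpoints deprecation warnings come from its client-go leader election still using an Endpoints-based lock (leaderelection.go:243 above, and the Kind:"Endpoints" event), which the v1.34 apiserver warns about on every poll; noisy but harmless. The lock object itself can be inspected (a sketch):

    kubectl --context embed-certs-173264 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml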
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-173264 -n embed-certs-173264
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-173264 -n embed-certs-173264: exit status 2 (393.897681ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
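minikube status encodes per-component state in its exit code, which is why the harness treats exit status 2 with APIServer reported as "Running" as possibly fine ("may be ok"); here the kubelet had just been stopped by the pause step, which drags the overall status nonzero. The full per-component view is easier to read as JSON (a sketch; -o/--output json is a documented minikube status flag):

    out/minikube-linux-arm64 status -p embed-certs-173264 --output json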
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-173264 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-173264
helpers_test.go:243: (dbg) docker inspect embed-certs-173264:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef",
	        "Created": "2025-10-25T09:32:48.526873954Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 200507,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:34:36.189440371Z",
	            "FinishedAt": "2025-10-25T09:34:35.250036912Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/hosts",
	        "LogPath": "/var/lib/docker/containers/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef-json.log",
	        "Name": "/embed-certs-173264",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-173264:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-173264",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef",
	                "LowerDir": "/var/lib/docker/overlay2/f31af6c3318ffc600cbea3cfd23719cc69a1f1792d31e48077fe84ae405b9fc8-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f31af6c3318ffc600cbea3cfd23719cc69a1f1792d31e48077fe84ae405b9fc8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f31af6c3318ffc600cbea3cfd23719cc69a1f1792d31e48077fe84ae405b9fc8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f31af6c3318ffc600cbea3cfd23719cc69a1f1792d31e48077fe84ae405b9fc8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-173264",
	                "Source": "/var/lib/docker/volumes/embed-certs-173264/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-173264",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-173264",
	                "name.minikube.sigs.k8s.io": "embed-certs-173264",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b28ae31e6379d7040dec40b9fec7ae1982dea4cc23e4da745f1b3db1f8133312",
	            "SandboxKey": "/var/run/docker/netns/b28ae31e6379",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-173264": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:33:5f:c9:b8:df",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2d181aa3ece229a97886c4873dbb8eca8797c23a56c68ee43959cebc56f78ff8",
	                    "EndpointID": "610f47692ebd9dc9591e9dc4c7087b8ff2a404d909f95a33f967b6fc7572cb8b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-173264",
	                        "7ab6ed1b9ea6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-173264 -n embed-certs-173264
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-173264 -n embed-certs-173264: exit status 2 (498.287836ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-173264 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-173264 logs -n 25: (1.862208919s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-881642       │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:32 UTC │
	│ image   │ old-k8s-version-881642 image list --format=json                                                                                                                                                                                               │ old-k8s-version-881642       │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ pause   │ -p old-k8s-version-881642 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-881642       │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ start   │ -p cert-expiration-440252 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-440252       │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-881642                                                                                                                                                                                                                     │ old-k8s-version-881642       │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-881642                                                                                                                                                                                                                     │ old-k8s-version-881642       │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:33 UTC │
	│ delete  │ -p cert-expiration-440252                                                                                                                                                                                                                     │ cert-expiration-440252       │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable metrics-server -p no-preload-179869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │                     │
	│ stop    │ -p no-preload-179869 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p no-preload-179869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-173264 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ stop    │ -p embed-certs-173264 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-173264 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:35 UTC │
	│ image   │ no-preload-179869 image list --format=json                                                                                                                                                                                                    │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ pause   │ -p no-preload-179869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ delete  │ -p no-preload-179869                                                                                                                                                                                                                          │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p no-preload-179869                                                                                                                                                                                                                          │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p disable-driver-mounts-901717                                                                                                                                                                                                               │ disable-driver-mounts-901717 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-666079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ image   │ embed-certs-173264 image list --format=json                                                                                                                                                                                                   │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ pause   │ -p embed-certs-173264 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:35:16
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:35:16.734589  203993 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:16.734776  203993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:16.734803  203993 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:16.734820  203993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:16.735108  203993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:35:16.735562  203993 out.go:368] Setting JSON to false
	I1025 09:35:16.736585  203993 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4668,"bootTime":1761380249,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:35:16.736683  203993 start.go:141] virtualization:  
	I1025 09:35:16.740570  203993 out.go:179] * [default-k8s-diff-port-666079] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:35:16.744829  203993 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:35:16.744840  203993 notify.go:220] Checking for updates...
	I1025 09:35:16.748196  203993 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:35:16.751439  203993 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:35:16.754537  203993 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:35:16.757681  203993 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:35:16.760626  203993 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:35:16.764118  203993 config.go:182] Loaded profile config "embed-certs-173264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:16.764225  203993 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:35:16.792364  203993 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:35:16.792505  203993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:35:16.865818  203993 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:35:16.855660379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:35:16.865936  203993 docker.go:318] overlay module found
	I1025 09:35:16.869309  203993 out.go:179] * Using the docker driver based on user configuration
	I1025 09:35:16.872296  203993 start.go:305] selected driver: docker
	I1025 09:35:16.872320  203993 start.go:925] validating driver "docker" against <nil>
	I1025 09:35:16.872335  203993 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:35:16.873123  203993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:35:16.932413  203993 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:35:16.923369159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:35:16.932567  203993 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:35:16.932811  203993 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:35:16.935903  203993 out.go:179] * Using Docker driver with root privileges
	I1025 09:35:16.938743  203993 cni.go:84] Creating CNI manager for ""
	I1025 09:35:16.938821  203993 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:35:16.938837  203993 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:35:16.938916  203993 start.go:349] cluster config:
	{Name:default-k8s-diff-port-666079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:35:16.942155  203993 out.go:179] * Starting "default-k8s-diff-port-666079" primary control-plane node in "default-k8s-diff-port-666079" cluster
	I1025 09:35:16.945027  203993 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:35:16.948068  203993 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:35:16.950880  203993 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:35:16.950938  203993 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:35:16.950953  203993 cache.go:58] Caching tarball of preloaded images
	I1025 09:35:16.951037  203993 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:35:16.951051  203993 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:35:16.951169  203993 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/config.json ...
	I1025 09:35:16.951192  203993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/config.json: {Name:mked4acc6ba01c7e06ccc90737ed7af84ba155de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:16.951353  203993 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:35:16.971030  203993 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:35:16.971058  203993 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:35:16.971071  203993 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:35:16.971094  203993 start.go:360] acquireMachinesLock for default-k8s-diff-port-666079: {Name:mk25f9f0a43388f7cdd9c3ecfcc6756ef82b00a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:35:16.971199  203993 start.go:364] duration metric: took 86.745µs to acquireMachinesLock for "default-k8s-diff-port-666079"
	I1025 09:35:16.971243  203993 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-666079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:35:16.971324  203993 start.go:125] createHost starting for "" (driver="docker")
	W1025 09:35:17.342456  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	W1025 09:35:19.343507  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	I1025 09:35:16.974805  203993 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:35:16.975042  203993 start.go:159] libmachine.API.Create for "default-k8s-diff-port-666079" (driver="docker")
	I1025 09:35:16.975084  203993 client.go:168] LocalClient.Create starting
	I1025 09:35:16.975167  203993 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem
	I1025 09:35:16.975205  203993 main.go:141] libmachine: Decoding PEM data...
	I1025 09:35:16.975222  203993 main.go:141] libmachine: Parsing certificate...
	I1025 09:35:16.975278  203993 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem
	I1025 09:35:16.975302  203993 main.go:141] libmachine: Decoding PEM data...
	I1025 09:35:16.975312  203993 main.go:141] libmachine: Parsing certificate...
	I1025 09:35:16.975708  203993 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-666079 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:35:16.992268  203993 cli_runner.go:211] docker network inspect default-k8s-diff-port-666079 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:35:16.992367  203993 network_create.go:284] running [docker network inspect default-k8s-diff-port-666079] to gather additional debugging logs...
	I1025 09:35:16.992388  203993 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-666079
	W1025 09:35:17.014851  203993 cli_runner.go:211] docker network inspect default-k8s-diff-port-666079 returned with exit code 1
	I1025 09:35:17.014884  203993 network_create.go:287] error running [docker network inspect default-k8s-diff-port-666079]: docker network inspect default-k8s-diff-port-666079: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-666079 not found
	I1025 09:35:17.014916  203993 network_create.go:289] output of [docker network inspect default-k8s-diff-port-666079]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-666079 not found
	
	** /stderr **
	I1025 09:35:17.015030  203993 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:35:17.031591  203993 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4076b76bdd01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:93:ad:e4:3e:11} reservation:<nil>}
	I1025 09:35:17.032017  203993 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ab40ae949743 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ea:83:23:78:ca:4d} reservation:<nil>}
	I1025 09:35:17.032340  203993 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ff3fdd90dcc2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:d4:a3:43:c3:da} reservation:<nil>}
	I1025 09:35:17.032868  203993 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3b0f0}
	I1025 09:35:17.032893  203993 network_create.go:124] attempt to create docker network default-k8s-diff-port-666079 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 09:35:17.033090  203993 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-666079 default-k8s-diff-port-666079
	I1025 09:35:17.107320  203993 network_create.go:108] docker network default-k8s-diff-port-666079 192.168.76.0/24 created
	I1025 09:35:17.107354  203993 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-666079" container
	I1025 09:35:17.107425  203993 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:35:17.125191  203993 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-666079 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-666079 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:35:17.143548  203993 oci.go:103] Successfully created a docker volume default-k8s-diff-port-666079
	I1025 09:35:17.143640  203993 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-666079-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-666079 --entrypoint /usr/bin/test -v default-k8s-diff-port-666079:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:35:17.750870  203993 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-666079
	I1025 09:35:17.750925  203993 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:35:17.750945  203993 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:35:17.751023  203993 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-666079:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1025 09:35:21.843135  200380 pod_ready.go:104] pod "coredns-66bc5c9577-vgz5x" is not "Ready", error: <nil>
	I1025 09:35:23.343278  200380 pod_ready.go:94] pod "coredns-66bc5c9577-vgz5x" is "Ready"
	I1025 09:35:23.343309  200380 pod_ready.go:86] duration metric: took 33.006542057s for pod "coredns-66bc5c9577-vgz5x" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:23.348335  200380 pod_ready.go:83] waiting for pod "etcd-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:23.354920  200380 pod_ready.go:94] pod "etcd-embed-certs-173264" is "Ready"
	I1025 09:35:23.354947  200380 pod_ready.go:86] duration metric: took 6.579755ms for pod "etcd-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:23.357667  200380 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:23.365179  200380 pod_ready.go:94] pod "kube-apiserver-embed-certs-173264" is "Ready"
	I1025 09:35:23.365204  200380 pod_ready.go:86] duration metric: took 7.50359ms for pod "kube-apiserver-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:23.368764  200380 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:23.540850  200380 pod_ready.go:94] pod "kube-controller-manager-embed-certs-173264" is "Ready"
	I1025 09:35:23.540918  200380 pod_ready.go:86] duration metric: took 172.126162ms for pod "kube-controller-manager-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:23.740893  200380 pod_ready.go:83] waiting for pod "kube-proxy-gwv98" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:24.141134  200380 pod_ready.go:94] pod "kube-proxy-gwv98" is "Ready"
	I1025 09:35:24.141159  200380 pod_ready.go:86] duration metric: took 400.235889ms for pod "kube-proxy-gwv98" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:24.341369  200380 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:24.740202  200380 pod_ready.go:94] pod "kube-scheduler-embed-certs-173264" is "Ready"
	I1025 09:35:24.740233  200380 pod_ready.go:86] duration metric: took 398.838291ms for pod "kube-scheduler-embed-certs-173264" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:24.740246  200380 pod_ready.go:40] duration metric: took 34.408609097s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:35:24.791433  200380 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:35:24.794852  200380 out.go:179] * Done! kubectl is now configured to use "embed-certs-173264" cluster and "default" namespace by default
	I1025 09:35:22.187900  203993 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-666079:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.43682399s)
	I1025 09:35:22.187934  203993 kic.go:203] duration metric: took 4.43698679s to extract preloaded images to volume ...
	W1025 09:35:22.188083  203993 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 09:35:22.188204  203993 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:35:22.262941  203993 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-666079 --name default-k8s-diff-port-666079 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-666079 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-666079 --network default-k8s-diff-port-666079 --ip 192.168.76.2 --volume default-k8s-diff-port-666079:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:35:22.595829  203993 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Running}}
	I1025 09:35:22.617102  203993 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:35:22.655541  203993 cli_runner.go:164] Run: docker exec default-k8s-diff-port-666079 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:35:22.710682  203993 oci.go:144] the created container "default-k8s-diff-port-666079" has a running status.
	I1025 09:35:22.710708  203993 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa...
	I1025 09:35:23.117018  203993 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:35:23.155804  203993 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:35:23.179628  203993 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:35:23.179647  203993 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-666079 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:35:23.236844  203993 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:35:23.260857  203993 machine.go:93] provisionDockerMachine start ...
	I1025 09:35:23.260957  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:23.286527  203993 main.go:141] libmachine: Using SSH client type: native
	I1025 09:35:23.286862  203993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1025 09:35:23.286872  203993 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:35:23.287626  203993 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 09:35:26.433553  203993 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-666079
	
	I1025 09:35:26.433577  203993 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-666079"
	I1025 09:35:26.433697  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:26.451631  203993 main.go:141] libmachine: Using SSH client type: native
	I1025 09:35:26.451955  203993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1025 09:35:26.451971  203993 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-666079 && echo "default-k8s-diff-port-666079" | sudo tee /etc/hostname
	I1025 09:35:26.615279  203993 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-666079
	
	I1025 09:35:26.615400  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:26.636584  203993 main.go:141] libmachine: Using SSH client type: native
	I1025 09:35:26.636928  203993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1025 09:35:26.636954  203993 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-666079' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-666079/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-666079' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:35:26.786297  203993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:35:26.786367  203993 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:35:26.786413  203993 ubuntu.go:190] setting up certificates
	I1025 09:35:26.786455  203993 provision.go:84] configureAuth start
	I1025 09:35:26.786568  203993 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-666079
	I1025 09:35:26.803806  203993 provision.go:143] copyHostCerts
	I1025 09:35:26.803881  203993 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:35:26.803894  203993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:35:26.803974  203993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:35:26.804079  203993 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:35:26.804088  203993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:35:26.804117  203993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:35:26.804213  203993 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:35:26.804224  203993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:35:26.804248  203993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 09:35:26.804302  203993 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-666079 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-666079 localhost minikube]
	I1025 09:35:27.281536  203993 provision.go:177] copyRemoteCerts
	I1025 09:35:27.281615  203993 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:35:27.281657  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:27.299201  203993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:35:27.406406  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:35:27.432457  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 09:35:27.449732  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:35:27.467224  203993 provision.go:87] duration metric: took 680.7321ms to configureAuth
	I1025 09:35:27.467252  203993 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:35:27.467500  203993 config.go:182] Loaded profile config "default-k8s-diff-port-666079": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:27.467658  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:27.485078  203993 main.go:141] libmachine: Using SSH client type: native
	I1025 09:35:27.485400  203993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1025 09:35:27.485420  203993 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:35:27.828831  203993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:35:27.828851  203993 machine.go:96] duration metric: took 4.567974893s to provisionDockerMachine
	I1025 09:35:27.828860  203993 client.go:171] duration metric: took 10.853765254s to LocalClient.Create
	I1025 09:35:27.828883  203993 start.go:167] duration metric: took 10.853842333s to libmachine.API.Create "default-k8s-diff-port-666079"
	I1025 09:35:27.828892  203993 start.go:293] postStartSetup for "default-k8s-diff-port-666079" (driver="docker")
	I1025 09:35:27.828902  203993 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:35:27.828967  203993 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:35:27.829006  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:27.847386  203993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:35:27.954424  203993 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:35:27.957890  203993 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:35:27.957924  203993 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:35:27.957937  203993 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:35:27.958039  203993 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:35:27.958136  203993 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:35:27.958251  203993 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:35:27.966274  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:35:27.985493  203993 start.go:296] duration metric: took 156.587247ms for postStartSetup
	I1025 09:35:27.985865  203993 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-666079
	I1025 09:35:28.010877  203993 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/config.json ...
	I1025 09:35:28.011202  203993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:35:28.011257  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:28.030749  203993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:35:28.135614  203993 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:35:28.140965  203993 start.go:128] duration metric: took 11.169625999s to createHost
	I1025 09:35:28.140988  203993 start.go:83] releasing machines lock for "default-k8s-diff-port-666079", held for 11.169772725s
	I1025 09:35:28.141058  203993 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-666079
	I1025 09:35:28.158845  203993 ssh_runner.go:195] Run: cat /version.json
	I1025 09:35:28.158905  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:28.158972  203993 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:35:28.159026  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:28.176938  203993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:35:28.180011  203993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:35:28.281749  203993 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:28.374612  203993 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:35:28.412295  203993 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:35:28.416957  203993 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:35:28.417031  203993 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:35:28.454106  203993 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
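The find invocation logged above has its shell quoting stripped by the logger; a runnable equivalent performing the same rename (moving bridge/podman CNI configs aside so the runtime ignores them) would be:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;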
	I1025 09:35:28.454127  203993 start.go:495] detecting cgroup driver to use...
	I1025 09:35:28.454160  203993 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:35:28.454213  203993 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:35:28.471570  203993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:35:28.484603  203993 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:35:28.484673  203993 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:35:28.502821  203993 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:35:28.530299  203993 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:35:28.656161  203993 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:35:28.791083  203993 docker.go:234] disabling docker service ...
	I1025 09:35:28.791155  203993 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:35:28.814761  203993 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:35:28.828564  203993 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:35:28.953712  203993 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:35:29.081097  203993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:35:29.095274  203993 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:35:29.109128  203993 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:35:29.109196  203993 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:35:29.118153  203993 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:35:29.118272  203993 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:35:29.128419  203993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:35:29.137200  203993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:35:29.145717  203993 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:35:29.153783  203993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:35:29.162986  203993 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:35:29.177335  203993 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:35:29.186373  203993 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:35:29.193845  203993 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:35:29.201610  203993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:35:29.318025  203993 ssh_runner.go:195] Run: sudo systemctl restart crio
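Condensed, the runtime configuration steps logged above reduce to the following sequence; a minimal sketch, assuming the stock 02-crio.conf drop-in shipped in the kicbase image:

	# Point crictl at CRI-O's socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Pin the pause image and select the cgroupfs cgroup manager
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# Enable IPv4 forwarding, then apply everything
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio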
	I1025 09:35:29.463651  203993 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:35:29.463788  203993 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:35:29.467787  203993 start.go:563] Will wait 60s for crictl version
	I1025 09:35:29.467920  203993 ssh_runner.go:195] Run: which crictl
	I1025 09:35:29.471723  203993 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:35:29.504308  203993 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:35:29.504468  203993 ssh_runner.go:195] Run: crio --version
	I1025 09:35:29.546286  203993 ssh_runner.go:195] Run: crio --version
	I1025 09:35:29.579337  203993 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:35:29.582205  203993 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-666079 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:35:29.598849  203993 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 09:35:29.602695  203993 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
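The /etc/hosts rewrite above is an idempotent strip-then-append: any stale host.minikube.internal line is filtered out before the current mapping is appended, so repeated starts never accumulate duplicates. Expanded for readability:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.76.1\thost.minikube.internal\n'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts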
	I1025 09:35:29.612887  203993 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-666079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:35:29.613015  203993 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:35:29.613075  203993 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:35:29.649838  203993 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:35:29.649864  203993 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:35:29.649920  203993 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:35:29.675917  203993 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:35:29.675939  203993 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:35:29.675947  203993 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1025 09:35:29.676092  203993 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-666079 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:35:29.676178  203993 ssh_runner.go:195] Run: crio config
	I1025 09:35:29.739933  203993 cni.go:84] Creating CNI manager for ""
	I1025 09:35:29.739961  203993 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:35:29.740009  203993 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:35:29.740042  203993 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-666079 NodeName:default-k8s-diff-port-666079 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:35:29.740242  203993 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-666079"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:35:29.740342  203993 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:35:29.748160  203993 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:35:29.748268  203993 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:35:29.755652  203993 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1025 09:35:29.768491  203993 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:35:29.781294  203993 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
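A generated config like the kubeadm.yaml written above can be exercised before committing to a real init; a minimal sketch, assuming the file has been promoted from kubeadm.yaml.new to its final path as happens later in this log:

	# Parse the config and render what kubeadm would do, without changing the host
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run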
	I1025 09:35:29.794711  203993 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:35:29.799438  203993 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:35:29.809393  203993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:35:29.929912  203993 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:35:29.948052  203993 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079 for IP: 192.168.76.2
	I1025 09:35:29.948089  203993 certs.go:195] generating shared ca certs ...
	I1025 09:35:29.948123  203993 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:29.948322  203993 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:35:29.948392  203993 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:35:29.948405  203993 certs.go:257] generating profile certs ...
	I1025 09:35:29.948479  203993 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.key
	I1025 09:35:29.948497  203993 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt with IP's: []
	I1025 09:35:30.581675  203993 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt ...
	I1025 09:35:30.581713  203993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt: {Name:mkd6f8f4eed87bbc3b8a62ed0863f4de58c2de6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:30.581921  203993 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.key ...
	I1025 09:35:30.581945  203993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.key: {Name:mk2010ec0938e0114aafcd1480a556e177e64b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:30.582077  203993 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.key.f342de6b
	I1025 09:35:30.582099  203993 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.crt.f342de6b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 09:35:31.413066  203993 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.crt.f342de6b ...
	I1025 09:35:31.413098  203993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.crt.f342de6b: {Name:mk297ed3268847965db21d2ec9fa2837a9b902e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:31.413311  203993 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.key.f342de6b ...
	I1025 09:35:31.413327  203993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.key.f342de6b: {Name:mk70e34533dd755d2c65936b0b288cebda13ad2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:31.413418  203993 certs.go:382] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.crt.f342de6b -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.crt
	I1025 09:35:31.413509  203993 certs.go:386] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.key.f342de6b -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.key
	I1025 09:35:31.413574  203993 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.key
	I1025 09:35:31.413593  203993 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.crt with IP's: []
	I1025 09:35:32.176268  203993 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.crt ...
	I1025 09:35:32.176297  203993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.crt: {Name:mk1a5138837f9b473aea352f22f36cc4e4d38e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:32.176462  203993 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.key ...
	I1025 09:35:32.176478  203993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.key: {Name:mkf8e3e716007e3b8dde8195a5217860b40b497e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:32.176650  203993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:35:32.176696  203993 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:35:32.176710  203993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:35:32.176750  203993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:35:32.176778  203993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:35:32.176804  203993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:35:32.176850  203993 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:35:32.177483  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:35:32.195073  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:35:32.212814  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:35:32.231737  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:35:32.250660  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 09:35:32.268661  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:35:32.287914  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:35:32.306500  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:35:32.325931  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:35:32.343925  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:35:32.362163  203993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:35:32.380544  203993 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:35:32.393854  203993 ssh_runner.go:195] Run: openssl version
	I1025 09:35:32.400358  203993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:35:32.408967  203993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:35:32.412928  203993 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:35:32.413034  203993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:35:32.455731  203993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:35:32.464228  203993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:35:32.473386  203993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:35:32.477282  203993 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:35:32.477346  203993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:35:32.518955  203993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
	I1025 09:35:32.527537  203993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:35:32.536076  203993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:35:32.539852  203993 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:35:32.539953  203993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:35:32.581343  203993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
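The openssl x509 -hash calls above produce the subject-hash link names seen here (b5213941.0, 51391683.0, 3ec20f2e.0); OpenSSL resolves trust anchors through such hashed symlinks under /etc/ssl/certs. The per-certificate step, as a standalone sketch:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")
	# Create the hashed lookup link OpenSSL searches for this CA
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"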
	I1025 09:35:32.589727  203993 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:35:32.593250  203993 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:35:32.593319  203993 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-666079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:35:32.593402  203993 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:32.593475  203993 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:32.619622  203993 cri.go:89] found id: ""
	I1025 09:35:32.619762  203993 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:35:32.627520  203993 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:35:32.635407  203993 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:35:32.635472  203993 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:35:32.644117  203993 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:35:32.644139  203993 kubeadm.go:157] found existing configuration files:
	
	I1025 09:35:32.644229  203993 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1025 09:35:32.653027  203993 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:35:32.653131  203993 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:35:32.660535  203993 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1025 09:35:32.668031  203993 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:35:32.668114  203993 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:35:32.675425  203993 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1025 09:35:32.683117  203993 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:35:32.683178  203993 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:35:32.690358  203993 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1025 09:35:32.698251  203993 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:35:32.698326  203993 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:35:32.706173  203993 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
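Stripped of the logger's quoting, the bootstrap command above prepends minikube's bundled v1.34.1 binaries to PATH and suppresses the preflight checks the docker driver cannot satisfy; in outline:

	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables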
	I1025 09:35:32.747409  203993 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:35:32.747737  203993 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:35:32.773884  203993 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:35:32.774061  203993 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 09:35:32.774114  203993 kubeadm.go:318] OS: Linux
	I1025 09:35:32.774189  203993 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:35:32.774250  203993 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 09:35:32.774306  203993 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:35:32.774373  203993 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:35:32.774436  203993 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:35:32.774500  203993 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:35:32.774553  203993 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:35:32.774617  203993 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:35:32.774672  203993 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 09:35:32.856395  203993 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:35:32.856513  203993 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:35:32.856614  203993 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:35:32.868500  203993 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:35:32.874262  203993 out.go:252]   - Generating certificates and keys ...
	I1025 09:35:32.874358  203993 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:35:32.874435  203993 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:35:33.140121  203993 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:35:33.611471  203993 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:35:34.705009  203993 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:35:35.223586  203993 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:35:35.423755  203993 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:35:35.423916  203993 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-666079 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 09:35:35.862633  203993 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:35:35.863026  203993 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-666079 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 09:35:36.627822  203993 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:35:37.269693  203993 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:35:37.874923  203993 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:35:37.875003  203993 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:35:38.485398  203993 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:35:38.946752  203993 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:35:39.343264  203993 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:35:39.615017  203993 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:35:39.833818  203993 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:35:39.837805  203993 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:35:39.840995  203993 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:35:39.844666  203993 out.go:252]   - Booting up control plane ...
	I1025 09:35:39.844786  203993 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:35:39.844868  203993 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:35:39.846735  203993 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:35:39.868456  203993 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:35:39.868865  203993 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:35:39.884206  203993 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:35:39.884976  203993 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:35:39.885212  203993 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:35:40.055718  203993 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:35:40.055845  203993 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:35:41.061553  203993 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002078079s
	I1025 09:35:41.061678  203993 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:35:41.061763  203993 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1025 09:35:41.061864  203993 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:35:41.061953  203993 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	
	
	==> CRI-O <==
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.385626805Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1df58a46-7d3d-4fe7-ac88-f0830ba5b77c name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.387075511Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2b0b3e7b-c626-4a71-aba0-bec8450b8148 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.387190352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.397243462Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.397438976Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/75a434fe13aa89bcaa3f46c899bee7a4cfcc44e284a72f2f3f21627356062229/merged/etc/passwd: no such file or directory"
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.397470049Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/75a434fe13aa89bcaa3f46c899bee7a4cfcc44e284a72f2f3f21627356062229/merged/etc/group: no such file or directory"
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.397797314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.431733419Z" level=info msg="Created container 554118d5ba9b888eff74911aeb9bc49200cf2e408aa2932423cd99c6fddc0070: kube-system/storage-provisioner/storage-provisioner" id=2b0b3e7b-c626-4a71-aba0-bec8450b8148 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.432810265Z" level=info msg="Starting container: 554118d5ba9b888eff74911aeb9bc49200cf2e408aa2932423cd99c6fddc0070" id=840edb25-3bfe-4068-a13b-11acf9c4875c name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:35:20 embed-certs-173264 crio[652]: time="2025-10-25T09:35:20.449024023Z" level=info msg="Started container" PID=1639 containerID=554118d5ba9b888eff74911aeb9bc49200cf2e408aa2932423cd99c6fddc0070 description=kube-system/storage-provisioner/storage-provisioner id=840edb25-3bfe-4068-a13b-11acf9c4875c name=/runtime.v1.RuntimeService/StartContainer sandboxID=9dfd97285022d098c5fa00e972d7558c4efcd73644e5cbcf5f6d46ec0791451a
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.828696385Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.83355552Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.833941223Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.834133397Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.843789931Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.843938594Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.84401016Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.848825208Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.848864331Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.848886509Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.852031171Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.85217843Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.85225898Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.856006808Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:35:29 embed-certs-173264 crio[652]: time="2025-10-25T09:35:29.8561589Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	554118d5ba9b8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   9dfd97285022d       storage-provisioner                          kube-system
	d635ff4cda826       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   9992589c39b0f       dashboard-metrics-scraper-6ffb444bf9-h6zck   kubernetes-dashboard
	bb24bf5fb5b2b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago      Running             kubernetes-dashboard        0                   4874621ec6a99       kubernetes-dashboard-855c9754f9-sj8dq        kubernetes-dashboard
	b3142984c1fee       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago      Running             coredns                     1                   e44d3bb532542       coredns-66bc5c9577-vgz5x                     kube-system
	7dc2d09da875d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   923bde58161a8       busybox                                      default
	1cfe71fd1fe7a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago      Exited              storage-provisioner         1                   9dfd97285022d       storage-provisioner                          kube-system
	39a267437a269       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago      Running             kube-proxy                  1                   1f387f1e79854       kube-proxy-gwv98                             kube-system
	0b944f32fb8b5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago      Running             kindnet-cni                 1                   f8bcd9fd6305e       kindnet-862lz                                kube-system
	ddedd8b79fda0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   cc27e2829ddb0       etcd-embed-certs-173264                      kube-system
	d6ad6127ca83d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   c294c59dc6881       kube-controller-manager-embed-certs-173264   kube-system
	8f686a2912e6c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   cb61274475e44       kube-apiserver-embed-certs-173264            kube-system
	0bd9ad4a66788       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   8747767de132d       kube-scheduler-embed-certs-173264            kube-system
	
	
	==> coredns [b3142984c1fee9ffc110ab096d6a0855ca60948dfef193f2d783f2d20bd9886e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42199 - 24989 "HINFO IN 5904666286340328563.1330875012800359847. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014940983s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-173264
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-173264
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=embed-certs-173264
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_33_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:33:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-173264
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:35:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:35:19 +0000   Sat, 25 Oct 2025 09:33:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:35:19 +0000   Sat, 25 Oct 2025 09:33:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:35:19 +0000   Sat, 25 Oct 2025 09:33:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:35:19 +0000   Sat, 25 Oct 2025 09:34:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-173264
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                fd815287-48cc-43e1-a791-5bcdc882763d
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-vgz5x                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m16s
	  kube-system                 etcd-embed-certs-173264                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-862lz                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-embed-certs-173264             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-embed-certs-173264    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-gwv98                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-embed-certs-173264             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-h6zck    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-sj8dq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m14s                  kube-proxy       
	  Normal   Starting                 53s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m31s (x8 over 2m31s)  kubelet          Node embed-certs-173264 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m31s (x8 over 2m31s)  kubelet          Node embed-certs-173264 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m31s (x8 over 2m31s)  kubelet          Node embed-certs-173264 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m21s                  kubelet          Node embed-certs-173264 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m21s                  kubelet          Node embed-certs-173264 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m21s                  kubelet          Node embed-certs-173264 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m21s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m17s                  node-controller  Node embed-certs-173264 event: Registered Node embed-certs-173264 in Controller
	  Normal   NodeReady                95s                    kubelet          Node embed-certs-173264 status is now: NodeReady
	  Normal   Starting                 60s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)      kubelet          Node embed-certs-173264 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)      kubelet          Node embed-certs-173264 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)      kubelet          Node embed-certs-173264 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node embed-certs-173264 event: Registered Node embed-certs-173264 in Controller
	
	
	==> dmesg <==
	[Oct25 09:13] overlayfs: idmapped layers are currently not supported
	[ +18.632418] overlayfs: idmapped layers are currently not supported
	[ +13.271347] overlayfs: idmapped layers are currently not supported
	[Oct25 09:14] overlayfs: idmapped layers are currently not supported
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	[Oct25 09:32] overlayfs: idmapped layers are currently not supported
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	[ +28.289706] overlayfs: idmapped layers are currently not supported
	[Oct25 09:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ddedd8b79fda0b2dea509f8022b5abaeb8024fdc8737fab8024dee99c98d3b19] <==
	{"level":"warn","ts":"2025-10-25T09:34:47.078928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.090918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.111298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.129634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.152846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.168762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.189249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.202449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.231963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.252764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.269874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.289893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.312297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.338672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.343636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.362381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.389086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.436837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.447534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.489943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.490813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.538178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.542737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.559667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:47.665802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44016","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:35:43 up  1:18,  0 user,  load average: 5.14, 3.95, 3.04
	Linux embed-certs-173264 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0b944f32fb8b576132d82481361923d19fdefbaee98287df71455fe148002ac8] <==
	I1025 09:34:49.629649       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:34:49.630140       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 09:34:49.630320       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:34:49.630368       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:34:49.630400       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:34:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:34:49.915286       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:34:49.915320       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:34:49.915420       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:34:49.916676       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 09:35:19.826120       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:35:19.916792       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:35:19.916903       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:35:19.917021       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1025 09:35:21.115665       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:35:21.115789       1 metrics.go:72] Registering metrics
	I1025 09:35:21.115856       1 controller.go:711] "Syncing nftables rules"
	I1025 09:35:29.828424       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:35:29.828461       1 main.go:301] handling current node
	I1025 09:35:39.834134       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 09:35:39.834165       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8f686a2912e6c0a8d6e4d5311cba470140c0c77f3d59d36367a840a7e2c18a5b] <==
	I1025 09:34:48.725548       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:34:48.725828       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1025 09:34:48.747460       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:34:48.752580       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:34:48.756442       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:34:48.756487       1 policy_source.go:240] refreshing policies
	I1025 09:34:48.800711       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:34:48.800927       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:34:48.800967       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:34:48.800980       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:34:48.801555       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:34:48.801567       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:34:48.813161       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:34:48.831211       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:34:49.102719       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:34:49.308444       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:34:49.312026       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:34:49.392945       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:34:49.463601       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:34:49.509846       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:34:49.768131       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.161.202"}
	I1025 09:34:49.851249       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.1.106"}
	I1025 09:34:51.900253       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:34:52.279192       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:34:52.380899       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d6ad6127ca83d1792eb0f03aca451cdd0a78c05c4baaecc9fd3ec902ddd40c88] <==
	I1025 09:34:51.902316       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:34:51.905727       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:34:51.907439       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:34:51.908570       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:34:51.915395       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:34:51.922119       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:34:51.922196       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:34:51.922272       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:34:51.922339       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:34:51.922437       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-173264"
	I1025 09:34:51.922489       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:34:51.922525       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:34:51.922553       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:34:51.922596       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 09:34:51.922181       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:34:51.922686       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:34:51.922716       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:34:51.924213       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:34:51.925384       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:34:51.926577       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:34:51.926650       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:34:51.926661       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:34:51.930135       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:34:51.932600       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:34:52.285177       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [39a267437a269c4557082094c09ced55f8c3e472342c82d7fd03ae3a25b0f17e] <==
	I1025 09:34:49.827031       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:34:49.950808       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:34:50.090308       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:34:50.111185       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 09:34:50.111369       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:34:50.166681       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:34:50.166810       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:34:50.186746       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:34:50.187154       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:34:50.188099       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:34:50.191696       1 config.go:200] "Starting service config controller"
	I1025 09:34:50.200030       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:34:50.200092       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:34:50.200098       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:34:50.200113       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:34:50.200117       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:34:50.204467       1 config.go:309] "Starting node config controller"
	I1025 09:34:50.204497       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:34:50.204506       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:34:50.300297       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:34:50.302137       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:34:50.302180       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0bd9ad4a667885f7374b118b9cdffe51d851fae7ec99302cc3b3126ee7b47b5a] <==
	I1025 09:34:47.473769       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:34:48.460524       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:34:48.460574       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:34:48.460586       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:34:48.460593       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:34:48.575229       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:34:48.575264       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:34:48.577663       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:34:48.577777       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:34:48.577802       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:34:48.577823       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 09:34:48.678309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [role.rbac.authorization.k8s.io \"extension-apiserver-authentication-reader\" not found, role.rbac.authorization.k8s.io \"system::leader-locking-kube-scheduler\" not found]" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1025 09:34:49.581601       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:34:52 embed-certs-173264 kubelet[779]: E1025 09:34:52.327850     779 projected.go:196] Error preparing data for projected volume kube-api-access-ml4l7 for pod kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sj8dq: configmap "kube-root-ca.crt" not found
	Oct 25 09:34:52 embed-certs-173264 kubelet[779]: E1025 09:34:52.328988     779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/25fa6987-f91d-42a5-8ef0-848aca718f8c-kube-api-access-ml4l7 podName:25fa6987-f91d-42a5-8ef0-848aca718f8c nodeName:}" failed. No retries permitted until 2025-10-25 09:34:52.828295268 +0000 UTC m=+9.922960806 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ml4l7" (UniqueName: "kubernetes.io/projected/25fa6987-f91d-42a5-8ef0-848aca718f8c-kube-api-access-ml4l7") pod "kubernetes-dashboard-855c9754f9-sj8dq" (UID: "25fa6987-f91d-42a5-8ef0-848aca718f8c") : configmap "kube-root-ca.crt" not found
	Oct 25 09:34:52 embed-certs-173264 kubelet[779]: E1025 09:34:52.329217     779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5c1980d7-e996-4bf9-b950-107c19ab5941-kube-api-access-ggv52 podName:5c1980d7-e996-4bf9-b950-107c19ab5941 nodeName:}" failed. No retries permitted until 2025-10-25 09:34:52.829174565 +0000 UTC m=+9.923840103 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ggv52" (UniqueName: "kubernetes.io/projected/5c1980d7-e996-4bf9-b950-107c19ab5941-kube-api-access-ggv52") pod "dashboard-metrics-scraper-6ffb444bf9-h6zck" (UID: "5c1980d7-e996-4bf9-b950-107c19ab5941") : configmap "kube-root-ca.crt" not found
	Oct 25 09:34:53 embed-certs-173264 kubelet[779]: I1025 09:34:53.058282     779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 09:34:53 embed-certs-173264 kubelet[779]: W1025 09:34:53.172231     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/crio-9992589c39b0fa029903d806610e601102eea590f03e7dc663d777dbd3088a0f WatchSource:0}: Error finding container 9992589c39b0fa029903d806610e601102eea590f03e7dc663d777dbd3088a0f: Status 404 returned error can't find the container with id 9992589c39b0fa029903d806610e601102eea590f03e7dc663d777dbd3088a0f
	Oct 25 09:34:53 embed-certs-173264 kubelet[779]: W1025 09:34:53.179607     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7ab6ed1b9ea6708ddac2c85d334494b29a2d55d0bac7a2069e7f45087d3443ef/crio-4874621ec6a99b3817ba8ced99c102f65fea0fadea6e133090b65bf43145da64 WatchSource:0}: Error finding container 4874621ec6a99b3817ba8ced99c102f65fea0fadea6e133090b65bf43145da64: Status 404 returned error can't find the container with id 4874621ec6a99b3817ba8ced99c102f65fea0fadea6e133090b65bf43145da64
	Oct 25 09:34:58 embed-certs-173264 kubelet[779]: I1025 09:34:58.216236     779 scope.go:117] "RemoveContainer" containerID="fdc0c4634f373a9d3eb8644b7f8f1367a73154717e63aeb3712718f163fd9fa7"
	Oct 25 09:34:59 embed-certs-173264 kubelet[779]: I1025 09:34:59.224409     779 scope.go:117] "RemoveContainer" containerID="c635b194523d6bb46b03d6567debf06efef47ab7249fcab83ccf4e5a6b461d3d"
	Oct 25 09:34:59 embed-certs-173264 kubelet[779]: E1025 09:34:59.224570     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6zck_kubernetes-dashboard(5c1980d7-e996-4bf9-b950-107c19ab5941)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6zck" podUID="5c1980d7-e996-4bf9-b950-107c19ab5941"
	Oct 25 09:34:59 embed-certs-173264 kubelet[779]: I1025 09:34:59.224727     779 scope.go:117] "RemoveContainer" containerID="fdc0c4634f373a9d3eb8644b7f8f1367a73154717e63aeb3712718f163fd9fa7"
	Oct 25 09:35:00 embed-certs-173264 kubelet[779]: I1025 09:35:00.323401     779 scope.go:117] "RemoveContainer" containerID="c635b194523d6bb46b03d6567debf06efef47ab7249fcab83ccf4e5a6b461d3d"
	Oct 25 09:35:00 embed-certs-173264 kubelet[779]: E1025 09:35:00.323598     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6zck_kubernetes-dashboard(5c1980d7-e996-4bf9-b950-107c19ab5941)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6zck" podUID="5c1980d7-e996-4bf9-b950-107c19ab5941"
	Oct 25 09:35:03 embed-certs-173264 kubelet[779]: I1025 09:35:03.104972     779 scope.go:117] "RemoveContainer" containerID="c635b194523d6bb46b03d6567debf06efef47ab7249fcab83ccf4e5a6b461d3d"
	Oct 25 09:35:03 embed-certs-173264 kubelet[779]: E1025 09:35:03.105160     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6zck_kubernetes-dashboard(5c1980d7-e996-4bf9-b950-107c19ab5941)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6zck" podUID="5c1980d7-e996-4bf9-b950-107c19ab5941"
	Oct 25 09:35:15 embed-certs-173264 kubelet[779]: I1025 09:35:15.079122     779 scope.go:117] "RemoveContainer" containerID="c635b194523d6bb46b03d6567debf06efef47ab7249fcab83ccf4e5a6b461d3d"
	Oct 25 09:35:15 embed-certs-173264 kubelet[779]: I1025 09:35:15.365503     779 scope.go:117] "RemoveContainer" containerID="c635b194523d6bb46b03d6567debf06efef47ab7249fcab83ccf4e5a6b461d3d"
	Oct 25 09:35:15 embed-certs-173264 kubelet[779]: I1025 09:35:15.365735     779 scope.go:117] "RemoveContainer" containerID="d635ff4cda8268a9abc07881a8b41b2e3801e381e546026d0ae96f16d82bfcd1"
	Oct 25 09:35:15 embed-certs-173264 kubelet[779]: E1025 09:35:15.365917     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6zck_kubernetes-dashboard(5c1980d7-e996-4bf9-b950-107c19ab5941)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6zck" podUID="5c1980d7-e996-4bf9-b950-107c19ab5941"
	Oct 25 09:35:15 embed-certs-173264 kubelet[779]: I1025 09:35:15.386348     779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sj8dq" podStartSLOduration=13.614154313 podStartE2EDuration="23.384707201s" podCreationTimestamp="2025-10-25 09:34:52 +0000 UTC" firstStartedPulling="2025-10-25 09:34:53.183312701 +0000 UTC m=+10.277978239" lastFinishedPulling="2025-10-25 09:35:02.953865581 +0000 UTC m=+20.048531127" observedRunningTime="2025-10-25 09:35:03.350599555 +0000 UTC m=+20.445265101" watchObservedRunningTime="2025-10-25 09:35:15.384707201 +0000 UTC m=+32.479372755"
	Oct 25 09:35:20 embed-certs-173264 kubelet[779]: I1025 09:35:20.383281     779 scope.go:117] "RemoveContainer" containerID="1cfe71fd1fe7a1b6bea21b990d2a3dbcc5dd1b17f294993c302c09956f95be67"
	Oct 25 09:35:23 embed-certs-173264 kubelet[779]: I1025 09:35:23.107772     779 scope.go:117] "RemoveContainer" containerID="d635ff4cda8268a9abc07881a8b41b2e3801e381e546026d0ae96f16d82bfcd1"
	Oct 25 09:35:23 embed-certs-173264 kubelet[779]: E1025 09:35:23.108237     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-h6zck_kubernetes-dashboard(5c1980d7-e996-4bf9-b950-107c19ab5941)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-h6zck" podUID="5c1980d7-e996-4bf9-b950-107c19ab5941"
	Oct 25 09:35:37 embed-certs-173264 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:35:37 embed-certs-173264 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:35:37 embed-certs-173264 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [bb24bf5fb5b2b235640af84b0d07ce607dc3657b1f1f6569aef374242880b7fb] <==
	2025/10/25 09:35:03 Using namespace: kubernetes-dashboard
	2025/10/25 09:35:03 Using in-cluster config to connect to apiserver
	2025/10/25 09:35:03 Using secret token for csrf signing
	2025/10/25 09:35:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:35:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:35:03 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:35:03 Generating JWE encryption key
	2025/10/25 09:35:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:35:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:35:04 Initializing JWE encryption key from synchronized object
	2025/10/25 09:35:04 Creating in-cluster Sidecar client
	2025/10/25 09:35:04 Serving insecurely on HTTP port: 9090
	2025/10/25 09:35:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:35:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:35:03 Starting overwatch
	
	
	==> storage-provisioner [1cfe71fd1fe7a1b6bea21b990d2a3dbcc5dd1b17f294993c302c09956f95be67] <==
	I1025 09:34:49.872533       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:35:19.881533       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [554118d5ba9b888eff74911aeb9bc49200cf2e408aa2932423cd99c6fddc0070] <==
	I1025 09:35:20.474373       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:35:20.491911       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:35:20.492055       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:35:20.495319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:23.950265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:28.213843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:31.812836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:34.867424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:37.890502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:37.903028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:35:37.906484       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:35:37.919777       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-173264_3dc15f09-793c-4197-8ad1-b3e5f1085083!
	I1025 09:35:37.907030       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f18b8aca-cedc-4b01-aff4-d043bcc5db0c", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-173264_3dc15f09-793c-4197-8ad1-b3e5f1085083 became leader
	W1025 09:35:37.932982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:37.954513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:35:38.020785       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-173264_3dc15f09-793c-4197-8ad1-b3e5f1085083!
	W1025 09:35:39.969161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:39.974432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:41.990561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:42.016950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
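
The storage-provisioner warnings above come from its leader election, which still takes its lock on the legacy v1 Endpoints object kube-system/k8s.io-minikube-hostpath (see the LeaderElection event in the same log); client-go deprecates that API in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. A minimal way to look at both objects by hand, assuming the embed-certs-173264 kubeconfig context from this run is still available:

	kubectl --context embed-certs-173264 -n kube-system get endpoints k8s.io-minikube-hostpath   # legacy lock object the warnings refer to
	kubectl --context embed-certs-173264 get endpointslices -A                                   # the replacement discovery.k8s.io/v1 API
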
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-173264 -n embed-certs-173264
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-173264 -n embed-certs-173264: exit status 2 (532.19967ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-173264 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.93s)
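
The step behind this Pause post-mortem is the pause invocation recorded with an empty END TIME in the Audit table further below; rerunning it by hand against the same profile would be, as a sketch using the binary from this run:

	out/minikube-linux-arm64 pause -p embed-certs-173264 --alsologtostderr -v=1
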

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.46s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-052144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-052144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (290.956528ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
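
The exit status 11 above comes from minikube's paused-state check, which per the stderr shells out to `sudo runc list -f json` on the node and fails because /run/runc is missing on this CRI-O node. A hypothetical by-hand check of the same state, assuming the commands are run from the test workspace:

	out/minikube-linux-arm64 -p newest-cni-052144 ssh -- sudo runc list -f json   # reproduces the failing check
	out/minikube-linux-arm64 -p newest-cni-052144 ssh -- ls -ld /run/runc        # confirms the missing runc state dir
	out/minikube-linux-arm64 -p newest-cni-052144 ssh -- sudo crictl ps -a       # CRI-O's own view of the containers
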
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-052144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-052144
helpers_test.go:243: (dbg) docker inspect newest-cni-052144:

-- stdout --
	[
	    {
	        "Id": "e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a",
	        "Created": "2025-10-25T09:35:54.490444314Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 207966,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:35:54.554380092Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a/hostname",
	        "HostsPath": "/var/lib/docker/containers/e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a/hosts",
	        "LogPath": "/var/lib/docker/containers/e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a/e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a-json.log",
	        "Name": "/newest-cni-052144",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-052144:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-052144",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a",
	                "LowerDir": "/var/lib/docker/overlay2/728840ee83faeb82b3599e3ae5f94f455fef7897c1a1e3a5bbf2533eeeba4cf0-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/728840ee83faeb82b3599e3ae5f94f455fef7897c1a1e3a5bbf2533eeeba4cf0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/728840ee83faeb82b3599e3ae5f94f455fef7897c1a1e3a5bbf2533eeeba4cf0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/728840ee83faeb82b3599e3ae5f94f455fef7897c1a1e3a5bbf2533eeeba4cf0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-052144",
	                "Source": "/var/lib/docker/volumes/newest-cni-052144/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-052144",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-052144",
	                "name.minikube.sigs.k8s.io": "newest-cni-052144",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "31af83b63620168ddc03473cf209572e334b6c9dc4d4c93638f5c697ad0f2d14",
	            "SandboxKey": "/var/run/docker/netns/31af83b63620",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-052144": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:27:75:88:02:b0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b84d6961f80e53c1b14499713a231fa50516df29401cc2dc41dc3be0b29a7d71",
	                    "EndpointID": "6f7d1e0d68bbb6206a547704e8ff2972e008a2a8de13c852058d6f6117c80265",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-052144",
	                        "e1443cadde6d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
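
The connectivity-relevant fields in this inspect dump live under NetworkSettings.Ports; for instance, the host port mapped to the API server port 8443/tcp (33086 above) can be pulled out with a one-liner, assuming jq is installed on the host:

	docker inspect newest-cni-052144 | jq -r '.[0].NetworkSettings.Ports["8443/tcp"][0].HostPort'   # -> 33086
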
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-052144 -n newest-cni-052144
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-052144 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-052144 logs -n 25: (1.097818605s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-881642                                                                                                                                                                                                                     │ old-k8s-version-881642       │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-881642                                                                                                                                                                                                                     │ old-k8s-version-881642       │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:33 UTC │
	│ delete  │ -p cert-expiration-440252                                                                                                                                                                                                                     │ cert-expiration-440252       │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable metrics-server -p no-preload-179869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │                     │
	│ stop    │ -p no-preload-179869 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p no-preload-179869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-173264 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ stop    │ -p embed-certs-173264 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-173264 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:35 UTC │
	│ image   │ no-preload-179869 image list --format=json                                                                                                                                                                                                    │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ pause   │ -p no-preload-179869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ delete  │ -p no-preload-179869                                                                                                                                                                                                                          │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p no-preload-179869                                                                                                                                                                                                                          │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p disable-driver-mounts-901717                                                                                                                                                                                                               │ disable-driver-mounts-901717 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-666079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ image   │ embed-certs-173264 image list --format=json                                                                                                                                                                                                   │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ pause   │ -p embed-certs-173264 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ delete  │ -p embed-certs-173264                                                                                                                                                                                                                         │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p embed-certs-173264                                                                                                                                                                                                                         │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ start   │ -p newest-cni-052144 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-052144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:35:48
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:35:48.316598  207481 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:48.317193  207481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:48.317228  207481 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:48.317248  207481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:48.317536  207481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:35:48.318065  207481 out.go:368] Setting JSON to false
	I1025 09:35:48.319006  207481 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4700,"bootTime":1761380249,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:35:48.319099  207481 start.go:141] virtualization:  
	I1025 09:35:48.322974  207481 out.go:179] * [newest-cni-052144] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:35:48.326024  207481 notify.go:220] Checking for updates...
	I1025 09:35:48.330481  207481 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:35:48.333448  207481 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:35:48.335930  207481 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:35:48.340540  207481 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:35:48.343384  207481 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:35:48.346284  207481 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:35:48.349539  207481 config.go:182] Loaded profile config "default-k8s-diff-port-666079": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:48.349694  207481 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:35:48.391154  207481 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:35:48.391276  207481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:35:48.483756  207481 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:35:48.474263937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:35:48.483857  207481 docker.go:318] overlay module found
	I1025 09:35:48.488320  207481 out.go:179] * Using the docker driver based on user configuration
	I1025 09:35:48.491197  207481 start.go:305] selected driver: docker
	I1025 09:35:48.491217  207481 start.go:925] validating driver "docker" against <nil>
	I1025 09:35:48.491231  207481 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:35:48.491930  207481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:35:48.577787  207481 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:35:48.565747613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:35:48.577943  207481 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1025 09:35:48.577966  207481 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1025 09:35:48.578199  207481 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:35:48.581172  207481 out.go:179] * Using Docker driver with root privileges
	I1025 09:35:48.584010  207481 cni.go:84] Creating CNI manager for ""
	I1025 09:35:48.584080  207481 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:35:48.584088  207481 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:35:48.584175  207481 start.go:349] cluster config:
	{Name:newest-cni-052144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:35:48.587336  207481 out.go:179] * Starting "newest-cni-052144" primary control-plane node in "newest-cni-052144" cluster
	I1025 09:35:48.590121  207481 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:35:48.592964  207481 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:35:48.595805  207481 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:35:48.595866  207481 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:35:48.595876  207481 cache.go:58] Caching tarball of preloaded images
	I1025 09:35:48.595982  207481 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:35:48.595992  207481 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:35:48.596101  207481 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/config.json ...
	I1025 09:35:48.596118  207481 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/config.json: {Name:mk8d81b72785bca5f751952878d3207cbffe5fe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
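The profile config saved above is plain JSON on the host, so its key fields can be checked directly. A minimal sketch, assuming jq is installed; the path is the one the log just wrote, and the field names come from the cluster config dump below:

	# Pull the Kubernetes-level settings out of the freshly saved profile config.
	jq '.KubernetesConfig | {KubernetesVersion, ClusterName, NetworkPlugin}' \
	  /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/config.json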
	I1025 09:35:48.596274  207481 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:35:48.623743  207481 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:35:48.623764  207481 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:35:48.623777  207481 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:35:48.623799  207481 start.go:360] acquireMachinesLock for newest-cni-052144: {Name:mkdc11ad68e6ad5dad60c6abaa6ced1c93cec008 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:35:48.623893  207481 start.go:364] duration metric: took 79.082µs to acquireMachinesLock for "newest-cni-052144"
	I1025 09:35:48.623917  207481 start.go:93] Provisioning new machine with config: &{Name:newest-cni-052144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:35:48.623988  207481 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:35:46.801089  203993 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.734698722s
	I1025 09:35:48.656082  203993 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.59477414s
	I1025 09:35:50.063794  203993 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.002368269s
	I1025 09:35:50.100188  203993 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:35:50.116284  203993 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:35:50.135994  203993 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:35:50.136440  203993 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-666079 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:35:50.151166  203993 kubeadm.go:318] [bootstrap-token] Using token: u2b266.s99pm55gol4521ma
	I1025 09:35:50.154294  203993 out.go:252]   - Configuring RBAC rules ...
	I1025 09:35:50.154426  203993 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:35:50.162037  203993 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:35:50.171793  203993 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:35:50.177261  203993 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:35:50.182902  203993 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:35:50.192428  203993 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:35:50.472299  203993 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:35:50.924006  203993 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:35:51.477673  203993 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:35:51.479293  203993 kubeadm.go:318] 
	I1025 09:35:51.479376  203993 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:35:51.479382  203993 kubeadm.go:318] 
	I1025 09:35:51.479483  203993 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:35:51.479489  203993 kubeadm.go:318] 
	I1025 09:35:51.479515  203993 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:35:51.479974  203993 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:35:51.480040  203993 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:35:51.480046  203993 kubeadm.go:318] 
	I1025 09:35:51.480103  203993 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:35:51.480107  203993 kubeadm.go:318] 
	I1025 09:35:51.480156  203993 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:35:51.480161  203993 kubeadm.go:318] 
	I1025 09:35:51.480215  203993 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:35:51.480293  203993 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:35:51.480367  203993 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:35:51.480371  203993 kubeadm.go:318] 
	I1025 09:35:51.480690  203993 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:35:51.480776  203993 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:35:51.480791  203993 kubeadm.go:318] 
	I1025 09:35:51.481079  203993 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token u2b266.s99pm55gol4521ma \
	I1025 09:35:51.481192  203993 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b \
	I1025 09:35:51.481402  203993 kubeadm.go:318] 	--control-plane 
	I1025 09:35:51.481413  203993 kubeadm.go:318] 
	I1025 09:35:51.481687  203993 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:35:51.481696  203993 kubeadm.go:318] 
	I1025 09:35:51.481996  203993 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token u2b266.s99pm55gol4521ma \
	I1025 09:35:51.482304  203993 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b 
	I1025 09:35:51.488021  203993 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 09:35:51.488254  203993 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 09:35:51.488363  203993 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
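The [control-plane-check] lines above poll each static-pod component until it reports healthy; once init finishes, the same probe can be repeated by hand. A sketch, assuming the admin kubeconfig sits at kubeadm's default path inside the node:

	# Ask the apiserver for its aggregated health report.
	kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw='/livez?verbose'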
	I1025 09:35:51.488378  203993 cni.go:84] Creating CNI manager for ""
	I1025 09:35:51.488385  203993 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:35:51.491840  203993 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:35:51.495143  203993 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:35:51.507947  203993 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:35:51.507970  203993 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:35:51.539261  203993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
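With the manifest applied, the recommended kindnet CNI should roll out as a daemonset before workloads schedule. A sketch for confirming that; the daemonset name "kindnet" is an assumption based on the recommendation logged above:

	# Block until the CNI daemonset is fully rolled out (or time out).
	kubectl -n kube-system rollout status daemonset/kindnet --timeout=90s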
	I1025 09:35:48.627283  207481 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 09:35:48.627504  207481 start.go:159] libmachine.API.Create for "newest-cni-052144" (driver="docker")
	I1025 09:35:48.627544  207481 client.go:168] LocalClient.Create starting
	I1025 09:35:48.627607  207481 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem
	I1025 09:35:48.627645  207481 main.go:141] libmachine: Decoding PEM data...
	I1025 09:35:48.627661  207481 main.go:141] libmachine: Parsing certificate...
	I1025 09:35:48.627713  207481 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem
	I1025 09:35:48.627732  207481 main.go:141] libmachine: Decoding PEM data...
	I1025 09:35:48.627744  207481 main.go:141] libmachine: Parsing certificate...
	I1025 09:35:48.628118  207481 cli_runner.go:164] Run: docker network inspect newest-cni-052144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:35:48.650312  207481 cli_runner.go:211] docker network inspect newest-cni-052144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:35:48.650398  207481 network_create.go:284] running [docker network inspect newest-cni-052144] to gather additional debugging logs...
	I1025 09:35:48.650414  207481 cli_runner.go:164] Run: docker network inspect newest-cni-052144
	W1025 09:35:48.671713  207481 cli_runner.go:211] docker network inspect newest-cni-052144 returned with exit code 1
	I1025 09:35:48.671746  207481 network_create.go:287] error running [docker network inspect newest-cni-052144]: docker network inspect newest-cni-052144: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-052144 not found
	I1025 09:35:48.671771  207481 network_create.go:289] output of [docker network inspect newest-cni-052144]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-052144 not found
	
	** /stderr **
	I1025 09:35:48.671867  207481 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:35:48.692234  207481 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4076b76bdd01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:93:ad:e4:3e:11} reservation:<nil>}
	I1025 09:35:48.692537  207481 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ab40ae949743 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ea:83:23:78:ca:4d} reservation:<nil>}
	I1025 09:35:48.692924  207481 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ff3fdd90dcc2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:d4:a3:43:c3:da} reservation:<nil>}
	I1025 09:35:48.693159  207481 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fca20c11b6d7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:96:e8:00:97:a4:0d} reservation:<nil>}
	I1025 09:35:48.694623  207481 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b5690}
	I1025 09:35:48.694652  207481 network_create.go:124] attempt to create docker network newest-cni-052144 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 09:35:48.694723  207481 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-052144 newest-cni-052144
	I1025 09:35:48.762818  207481 network_create.go:108] docker network newest-cni-052144 192.168.85.0/24 created
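The subnet scan above walks the 192.168.x.0/24 private ranges, skips any already claimed by an existing bridge, and settles on 192.168.85.0/24. The same picture can be reproduced with the docker CLI alone (a sketch, no minikube internals involved):

	# List every docker network alongside the subnet it owns.
	docker network ls -q | xargs docker network inspect \
	  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'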
	I1025 09:35:48.762852  207481 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-052144" container
	I1025 09:35:48.762929  207481 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:35:48.781601  207481 cli_runner.go:164] Run: docker volume create newest-cni-052144 --label name.minikube.sigs.k8s.io=newest-cni-052144 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:35:48.800242  207481 oci.go:103] Successfully created a docker volume newest-cni-052144
	I1025 09:35:48.800325  207481 cli_runner.go:164] Run: docker run --rm --name newest-cni-052144-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-052144 --entrypoint /usr/bin/test -v newest-cni-052144:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:35:49.427911  207481 oci.go:107] Successfully prepared a docker volume newest-cni-052144
	I1025 09:35:49.427961  207481 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:35:49.427980  207481 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:35:49.428312  207481 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-052144:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
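The sidecar above unpacks the lz4-compressed preload into the node's /var volume with tar. The same invocation, switched to list mode, peeks at the tarball on the host without touching the volume (sketch; lz4 assumed installed):

	# List the first few entries shipped in the preload tarball.
	tar -I lz4 -tf /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 | head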
	I1025 09:35:52.021749  203993 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:35:52.021914  203993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:35:52.022018  203993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-666079 minikube.k8s.io/updated_at=2025_10_25T09_35_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=default-k8s-diff-port-666079 minikube.k8s.io/primary=true
	I1025 09:35:52.464722  203993 ops.go:34] apiserver oom_adj: -16
	I1025 09:35:52.464842  203993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:35:52.965519  203993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:35:53.465916  203993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:35:53.965099  203993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:35:54.465036  203993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:35:54.965268  203993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:35:55.465198  203993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:35:55.965488  203993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:35:56.063051  203993 kubeadm.go:1113] duration metric: took 4.041186343s to wait for elevateKubeSystemPrivileges
	I1025 09:35:56.063082  203993 kubeadm.go:402] duration metric: took 23.469766448s to StartCluster
	I1025 09:35:56.063099  203993 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:56.063176  203993 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:35:56.063824  203993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:35:56.064038  203993 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:35:56.064118  203993 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:35:56.064374  203993 config.go:182] Loaded profile config "default-k8s-diff-port-666079": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:56.064411  203993 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:35:56.064467  203993 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-666079"
	I1025 09:35:56.064484  203993 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-666079"
	I1025 09:35:56.064493  203993 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-666079"
	I1025 09:35:56.064505  203993 host.go:66] Checking if "default-k8s-diff-port-666079" exists ...
	I1025 09:35:56.064512  203993 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-666079"
	I1025 09:35:56.064829  203993 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:35:56.065111  203993 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:35:56.068353  203993 out.go:179] * Verifying Kubernetes components...
	I1025 09:35:56.076260  203993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:35:56.100509  203993 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:35:56.102321  203993 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-666079"
	I1025 09:35:56.102355  203993 host.go:66] Checking if "default-k8s-diff-port-666079" exists ...
	I1025 09:35:56.102779  203993 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:35:56.103380  203993 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:35:56.103401  203993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:35:56.103457  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:56.150021  203993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:35:56.150081  203993 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:35:56.150094  203993 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:35:56.150149  203993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:35:56.177283  203993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:35:56.323288  203993 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:35:56.374892  203993 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:35:56.454608  203993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:35:56.575584  203993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:35:57.345877  203993 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.022553753s)
	I1025 09:35:57.345905  203993 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
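The sed pipeline above splices a hosts{} stanza into the CoreDNS Corefile so host.minikube.internal resolves to the gateway 192.168.76.1. A read-only sketch to confirm the record landed, assuming kubectl is pointed at this profile's context:

	# Print the Corefile and show the injected hosts block.
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A2 'hosts {'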
	I1025 09:35:57.346772  203993 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-666079" to be "Ready" ...
	I1025 09:35:57.560127  203993 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
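Both enabled addons materialize as ordinary objects in the cluster. A quick sketch to verify them; the bare pod name storage-provisioner matches how minikube normally deploys it, but treat that as an assumption here:

	# The default-storageclass addon registers a StorageClass...
	kubectl get storageclass
	# ...and the storage-provisioner addon runs a single pod in kube-system.
	kubectl -n kube-system get pod storage-provisioner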
	I1025 09:35:54.403889  207481 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-052144:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.975541468s)
	I1025 09:35:54.403920  207481 kic.go:203] duration metric: took 4.975936066s to extract preloaded images to volume ...
	W1025 09:35:54.404069  207481 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 09:35:54.404179  207481 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:35:54.472983  207481 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-052144 --name newest-cni-052144 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-052144 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-052144 --network newest-cni-052144 --ip 192.168.85.2 --volume newest-cni-052144:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:35:54.799228  207481 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Running}}
	I1025 09:35:54.819769  207481 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:35:54.843230  207481 cli_runner.go:164] Run: docker exec newest-cni-052144 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:35:54.903740  207481 oci.go:144] the created container "newest-cni-052144" has a running status.
	I1025 09:35:54.903786  207481 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa...
	I1025 09:35:55.611348  207481 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:35:55.647499  207481 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:35:55.672146  207481 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:35:55.672170  207481 kic_runner.go:114] Args: [docker exec --privileged newest-cni-052144 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:35:55.741320  207481 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:35:55.765878  207481 machine.go:93] provisionDockerMachine start ...
	I1025 09:35:55.765971  207481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:35:55.786670  207481 main.go:141] libmachine: Using SSH client type: native
	I1025 09:35:55.787016  207481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1025 09:35:55.787033  207481 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:35:55.787760  207481 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 09:35:57.563080  203993 addons.go:514] duration metric: took 1.498657161s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1025 09:35:57.851870  203993 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-666079" context rescaled to 1 replicas
	W1025 09:35:59.350132  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	I1025 09:35:58.942286  207481 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-052144
	
	I1025 09:35:58.942309  207481 ubuntu.go:182] provisioning hostname "newest-cni-052144"
	I1025 09:35:58.942373  207481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:35:58.962838  207481 main.go:141] libmachine: Using SSH client type: native
	I1025 09:35:58.963168  207481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1025 09:35:58.963187  207481 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-052144 && echo "newest-cni-052144" | sudo tee /etc/hostname
	I1025 09:35:59.124638  207481 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-052144
	
	I1025 09:35:59.124727  207481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:35:59.142778  207481 main.go:141] libmachine: Using SSH client type: native
	I1025 09:35:59.143141  207481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1025 09:35:59.143166  207481 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-052144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-052144/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-052144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:35:59.294062  207481 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:35:59.294150  207481 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:35:59.294174  207481 ubuntu.go:190] setting up certificates
	I1025 09:35:59.294184  207481 provision.go:84] configureAuth start
	I1025 09:35:59.294265  207481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-052144
	I1025 09:35:59.311498  207481 provision.go:143] copyHostCerts
	I1025 09:35:59.311560  207481 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:35:59.311582  207481 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:35:59.311663  207481 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:35:59.311755  207481 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:35:59.311768  207481 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:35:59.311796  207481 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:35:59.311854  207481 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:35:59.311862  207481 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:35:59.311886  207481 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 09:35:59.311935  207481 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.newest-cni-052144 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-052144]
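configureAuth above mints a server certificate with the SAN list shown. The resulting PEM can be inspected on the host with openssl (sketch; the -ext flag needs OpenSSL 1.1.1 or newer):

	# Print only the subjectAltName extension of the generated server cert.
	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem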
	I1025 09:35:59.397970  207481 provision.go:177] copyRemoteCerts
	I1025 09:35:59.398055  207481 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:35:59.398107  207481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:35:59.414971  207481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:35:59.521964  207481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:35:59.540399  207481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:35:59.558558  207481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:35:59.576248  207481 provision.go:87] duration metric: took 282.04295ms to configureAuth
	I1025 09:35:59.576275  207481 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:35:59.576501  207481 config.go:182] Loaded profile config "newest-cni-052144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:59.576625  207481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:35:59.593749  207481 main.go:141] libmachine: Using SSH client type: native
	I1025 09:35:59.594158  207481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1025 09:35:59.594183  207481 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:35:59.869012  207481 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
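The drop-in just written marks the service CIDR 10.96.0.0/12 as an insecure registry for CRI-O and restarts the daemon. A sketch for double-checking both from the host, assuming the standard minikube ssh plumbing:

	# Confirm the drop-in exists and crio survived the restart.
	minikube -p newest-cni-052144 ssh -- 'cat /etc/sysconfig/crio.minikube && systemctl is-active crio'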
	
	I1025 09:35:59.869033  207481 machine.go:96] duration metric: took 4.103131587s to provisionDockerMachine
	I1025 09:35:59.869043  207481 client.go:171] duration metric: took 11.241492908s to LocalClient.Create
	I1025 09:35:59.869056  207481 start.go:167] duration metric: took 11.241553561s to libmachine.API.Create "newest-cni-052144"
	I1025 09:35:59.869063  207481 start.go:293] postStartSetup for "newest-cni-052144" (driver="docker")
	I1025 09:35:59.869088  207481 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:35:59.869162  207481 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:35:59.869213  207481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:35:59.887928  207481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:35:59.994700  207481 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:35:59.998119  207481 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:35:59.998189  207481 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:35:59.998206  207481 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:35:59.998268  207481 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:35:59.998349  207481 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:35:59.998466  207481 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:36:00.023138  207481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:36:00.157537  207481 start.go:296] duration metric: took 288.457711ms for postStartSetup
	I1025 09:36:00.158681  207481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-052144
	I1025 09:36:00.296651  207481 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/config.json ...
	I1025 09:36:00.297044  207481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:36:00.297093  207481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:00.333205  207481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:00.494989  207481 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:36:00.501427  207481 start.go:128] duration metric: took 11.877423801s to createHost
	I1025 09:36:00.501454  207481 start.go:83] releasing machines lock for "newest-cni-052144", held for 11.877552352s
	I1025 09:36:00.501542  207481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-052144
	I1025 09:36:00.557823  207481 ssh_runner.go:195] Run: cat /version.json
	I1025 09:36:00.558172  207481 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:36:00.558554  207481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:00.561600  207481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:00.598197  207481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:00.603034  207481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:00.713888  207481 ssh_runner.go:195] Run: systemctl --version
	I1025 09:36:00.808796  207481 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:36:00.845976  207481 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:36:00.850923  207481 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:36:00.850993  207481 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:36:00.879257  207481 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
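The find/mv pair above sidelines competing CNI configs by renaming them with a .mk_disabled suffix, which is why 87-podman-bridge.conflist appears in the disabled list. A rough Go equivalent of that rename pass (a hypothetical standalone helper, not the minikube source):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Mirror the `find ... -exec mv {} {}.mk_disabled` step from the log:
	// sideline any bridge/podman CNI configs so kindnet is the only CNI.
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join("/etc/cni/net.d", name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			fmt.Println("disabled", src)
		}
	}
}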
	I1025 09:36:00.879328  207481 start.go:495] detecting cgroup driver to use...
	I1025 09:36:00.879366  207481 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:36:00.879425  207481 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:36:00.898090  207481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:36:00.911710  207481 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:36:00.911830  207481 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:36:00.934294  207481 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:36:00.955217  207481 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:36:01.111798  207481 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:36:01.243877  207481 docker.go:234] disabling docker service ...
	I1025 09:36:01.243961  207481 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:36:01.272080  207481 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:36:01.291202  207481 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:36:01.423095  207481 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:36:01.548036  207481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:36:01.560957  207481 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:36:01.575083  207481 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:36:01.575162  207481 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:01.584558  207481 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:36:01.584667  207481 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:01.593534  207481 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:01.602391  207481 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:01.611197  207481 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:36:01.620905  207481 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:01.629949  207481 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:01.644403  207481 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
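The sed one-liners above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10.1 and force cgroup_manager to cgroupfs, plus the conmon_cgroup and default_sysctls fixups. A small Go sketch of the two simple substitutions, using the same line-anchored regex shapes (illustrative only):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Same effect as the sed one-liners in the log: pin the pause image
	// and force the cgroupfs cgroup manager.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}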
	I1025 09:36:01.653534  207481 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:36:01.661192  207481 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:36:01.668952  207481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:36:01.792754  207481 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:36:01.914637  207481 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:36:01.914744  207481 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:36:01.918825  207481 start.go:563] Will wait 60s for crictl version
	I1025 09:36:01.918893  207481 ssh_runner.go:195] Run: which crictl
	I1025 09:36:01.922573  207481 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:36:01.948070  207481 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:36:01.948156  207481 ssh_runner.go:195] Run: crio --version
	I1025 09:36:01.981267  207481 ssh_runner.go:195] Run: crio --version
	I1025 09:36:02.015676  207481 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:36:02.018713  207481 cli_runner.go:164] Run: docker network inspect newest-cni-052144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:36:02.041054  207481 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 09:36:02.045331  207481 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
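The bash pipeline above makes the host record idempotent: strip any existing host.minikube.internal line from /etc/hosts, then append the current gateway IP. The same logic as a short Go sketch (a hypothetical helper; IP and hostname taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Idempotent version of the log's grep-and-append: drop any existing
	// host.minikube.internal record, then append the gateway IP.
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.85.1\thost.minikube.internal")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

The identical pattern recurs later for the control-plane.minikube.internal record.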
	I1025 09:36:02.060322  207481 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 09:36:02.063342  207481 kubeadm.go:883] updating cluster {Name:newest-cni-052144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:36:02.063493  207481 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:36:02.063572  207481 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:36:02.103147  207481 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:36:02.103171  207481 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:36:02.103229  207481 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:36:02.129223  207481 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:36:02.129245  207481 cache_images.go:85] Images are preloaded, skipping loading
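The preload check works by listing images through crictl and comparing tags against the expected set for this Kubernetes version. A sketch of that comparison, assuming the usual `crictl images --output json` shape with a repoTags field per image (the field names are an assumption here, not taken from minikube):

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// Minimal subset of `crictl images --output json` we care about.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// One of the images the v1.34.1 preload is expected to carry.
	fmt.Println("pause preloaded:", have["registry.k8s.io/pause:3.10.1"])
}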
	I1025 09:36:02.129252  207481 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 09:36:02.129336  207481 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-052144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:36:02.129426  207481 ssh_runner.go:195] Run: crio config
	I1025 09:36:02.197004  207481 cni.go:84] Creating CNI manager for ""
	I1025 09:36:02.197029  207481 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:36:02.197057  207481 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 09:36:02.197090  207481 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-052144 NodeName:newest-cni-052144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:36:02.197254  207481 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-052144"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:36:02.197339  207481 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:36:02.205041  207481 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:36:02.205131  207481 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:36:02.212758  207481 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 09:36:02.225374  207481 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:36:02.238874  207481 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1025 09:36:02.252604  207481 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:36:02.256121  207481 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:36:02.265851  207481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:36:02.384073  207481 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:36:02.401404  207481 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144 for IP: 192.168.85.2
	I1025 09:36:02.401427  207481 certs.go:195] generating shared ca certs ...
	I1025 09:36:02.401444  207481 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:02.401582  207481 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:36:02.401639  207481 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:36:02.401649  207481 certs.go:257] generating profile certs ...
	I1025 09:36:02.401714  207481 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/client.key
	I1025 09:36:02.401737  207481 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/client.crt with IP's: []
	I1025 09:36:02.680213  207481 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/client.crt ...
	I1025 09:36:02.680246  207481 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/client.crt: {Name:mk6b2e263432173b8b1387803deefe084df0a050 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:02.680498  207481 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/client.key ...
	I1025 09:36:02.680512  207481 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/client.key: {Name:mk3fd7478dbff12ba149cb059089660160235d93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:02.680615  207481 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.key.45317619
	I1025 09:36:02.680631  207481 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.crt.45317619 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1025 09:36:03.018275  207481 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.crt.45317619 ...
	I1025 09:36:03.018315  207481 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.crt.45317619: {Name:mkf801b8b2962fb49ff3bedfff3aea3e8f7c5bc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:03.018500  207481 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.key.45317619 ...
	I1025 09:36:03.018520  207481 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.key.45317619: {Name:mkc15ae32d2be88df4678bef278276c3f1715185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:03.018632  207481 certs.go:382] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.crt.45317619 -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.crt
	I1025 09:36:03.018730  207481 certs.go:386] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.key.45317619 -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.key
	I1025 09:36:03.018800  207481 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.key
	I1025 09:36:03.018822  207481 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.crt with IP's: []
	I1025 09:36:03.844436  207481 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.crt ...
	I1025 09:36:03.844470  207481 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.crt: {Name:mk03c57dcfd67becf67ccff5947d1b4779a0bff6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:03.844665  207481 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.key ...
	I1025 09:36:03.844679  207481 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.key: {Name:mkc0854485ffa71c6afa54567fc0d2ca06662261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
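The [certs] steps above generate profile certificates signed by minikubeCA, for example an apiserver serving cert whose IP SANs are [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]. A self-contained Go sketch of issuing such a cert with crypto/x509 (a throwaway CA stands in for minikubeCA; this is not minikube's crypto.go):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Sign a serving cert carrying the IP SANs from the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}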
	I1025 09:36:03.844886  207481 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:36:03.844932  207481 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:36:03.844941  207481 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:36:03.844966  207481 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:36:03.844999  207481 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:36:03.845023  207481 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:36:03.845071  207481 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:36:03.845644  207481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:36:03.865061  207481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:36:03.884029  207481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:36:03.910147  207481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:36:03.929920  207481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 09:36:03.949404  207481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:36:03.979922  207481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:36:03.998550  207481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:36:04.021867  207481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:36:04.051773  207481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:36:04.071176  207481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:36:04.090040  207481 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:36:04.104764  207481 ssh_runner.go:195] Run: openssl version
	I1025 09:36:04.111414  207481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:36:04.120213  207481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:36:04.124157  207481 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:36:04.124266  207481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:36:04.165291  207481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:36:04.173958  207481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:36:04.182597  207481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:36:04.186396  207481 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:36:04.186465  207481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:36:04.227933  207481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:36:04.236981  207481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:36:04.246038  207481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:36:04.249801  207481 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:36:04.249868  207481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:36:04.295400  207481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
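Each `openssl x509 -hash` plus `ln -fs` pair above builds the subject-hash symlink (41102.pem -> 3ec20f2e.0, and so on) that OpenSSL uses to look CA certificates up in /etc/ssl/certs. A Go sketch of creating one such link, shelling out to openssl for the hash (an illustrative helper; paths from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/41102.pem"
	// `openssl x509 -hash -noout` prints the subject-name hash that names
	// the lookup symlink in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // equivalent of ln -f
	if err := os.Symlink("/etc/ssl/certs/41102.pem", link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(link, "->", "/etc/ssl/certs/41102.pem")
}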
	I1025 09:36:04.304549  207481 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:36:04.308208  207481 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:36:04.308275  207481 kubeadm.go:400] StartCluster: {Name:newest-cni-052144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:36:04.308371  207481 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:36:04.308433  207481 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:36:04.335449  207481 cri.go:89] found id: ""
	I1025 09:36:04.335529  207481 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:36:04.343591  207481 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:36:04.352633  207481 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:36:04.352699  207481 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:36:04.361572  207481 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:36:04.361602  207481 kubeadm.go:157] found existing configuration files:
	
	I1025 09:36:04.361675  207481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:36:04.369524  207481 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:36:04.369595  207481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:36:04.377256  207481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:36:04.384955  207481 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:36:04.385043  207481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:36:04.393084  207481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:36:04.400619  207481 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:36:04.400701  207481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:36:04.408094  207481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:36:04.416092  207481 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:36:04.416187  207481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
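The stale-config pass above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not mention it; on a first start all four greps fail simply because the files do not exist yet. The same pass as a Go sketch (hypothetical helper; endpoint and file list from the log):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if os.IsNotExist(err) {
			continue // first start: nothing to clean, same as this log
		} else if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// A kubeconfig pointing anywhere else is stale; remove it so
		// kubeadm regenerates it against the expected endpoint.
		if !bytes.Contains(data, []byte(endpoint)) {
			if err := os.Remove(f); err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			fmt.Println("removed stale", f)
		}
	}
}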
	I1025 09:36:04.424586  207481 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:36:04.465834  207481 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:36:04.465936  207481 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:36:04.494091  207481 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:36:04.494208  207481 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 09:36:04.494262  207481 kubeadm.go:318] OS: Linux
	I1025 09:36:04.494351  207481 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:36:04.494435  207481 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 09:36:04.494508  207481 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:36:04.494575  207481 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:36:04.494649  207481 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:36:04.494722  207481 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:36:04.494792  207481 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:36:04.494869  207481 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:36:04.494935  207481 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 09:36:04.575377  207481 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:36:04.575544  207481 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:36:04.575671  207481 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:36:04.586355  207481 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1025 09:36:01.849774  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	W1025 09:36:03.851384  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	W1025 09:36:06.351585  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	I1025 09:36:04.592229  207481 out.go:252]   - Generating certificates and keys ...
	I1025 09:36:04.592368  207481 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:36:04.592468  207481 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:36:05.285343  207481 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:36:05.328222  207481 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:36:05.882621  207481 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:36:06.814223  207481 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:36:07.380529  207481 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:36:07.380885  207481 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-052144] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 09:36:07.650432  207481 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:36:07.650628  207481 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-052144] and IPs [192.168.85.2 127.0.0.1 ::1]
	W1025 09:36:08.850496  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	W1025 09:36:10.850678  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	I1025 09:36:09.065392  207481 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:36:09.780756  207481 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:36:10.306118  207481 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:36:10.306420  207481 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:36:10.421930  207481 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:36:11.066189  207481 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:36:11.988880  207481 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:36:12.749478  207481 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:36:13.314727  207481 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:36:13.315580  207481 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:36:13.318309  207481 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1025 09:36:13.350988  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	W1025 09:36:15.351055  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	I1025 09:36:13.321864  207481 out.go:252]   - Booting up control plane ...
	I1025 09:36:13.322006  207481 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:36:13.322089  207481 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:36:13.322174  207481 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:36:13.342531  207481 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:36:13.342869  207481 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:36:13.354422  207481 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:36:13.354543  207481 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:36:13.354593  207481 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:36:13.494145  207481 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:36:13.494298  207481 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:36:14.490344  207481 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001726286s
	I1025 09:36:14.493975  207481 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:36:14.494090  207481 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1025 09:36:14.494221  207481 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:36:14.494309  207481 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:36:16.346716  207481 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.851637137s
	I1025 09:36:18.517067  207481 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.023024132s
	I1025 09:36:20.495484  207481 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001414424s
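kubeadm's control-plane-check polls the three component endpoints shown above until each returns HTTP 200. A minimal Go poller against the same endpoints with the same 4m0s budget (a sketch only; verification is skipped because the serving certs are self-signed):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	// Local health probe against a self-signed serving cert.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// The three endpoints probed in the log above.
	for _, u := range []string{
		"https://192.168.85.2:8443/livez",
		"https://127.0.0.1:10257/healthz",
		"https://127.0.0.1:10259/livez",
	} {
		if err := waitHealthy(u, 4*time.Minute); err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Println(u, "ok")
	}
}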
	I1025 09:36:20.516062  207481 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:36:20.534010  207481 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:36:20.555186  207481 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:36:20.555421  207481 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-052144 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:36:20.570439  207481 kubeadm.go:318] [bootstrap-token] Using token: sclznf.2scjaaw40t7fuz90
	W1025 09:36:17.850036  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	W1025 09:36:19.850820  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	I1025 09:36:20.573594  207481 out.go:252]   - Configuring RBAC rules ...
	I1025 09:36:20.573728  207481 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:36:20.582081  207481 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:36:20.596371  207481 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:36:20.600923  207481 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:36:20.606092  207481 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:36:20.611799  207481 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:36:20.903812  207481 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:36:21.391088  207481 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:36:21.906391  207481 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:36:21.907463  207481 kubeadm.go:318] 
	I1025 09:36:21.907540  207481 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:36:21.907558  207481 kubeadm.go:318] 
	I1025 09:36:21.907644  207481 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:36:21.907654  207481 kubeadm.go:318] 
	I1025 09:36:21.907680  207481 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:36:21.907743  207481 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:36:21.907796  207481 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:36:21.907805  207481 kubeadm.go:318] 
	I1025 09:36:21.907859  207481 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:36:21.907866  207481 kubeadm.go:318] 
	I1025 09:36:21.907914  207481 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:36:21.907922  207481 kubeadm.go:318] 
	I1025 09:36:21.907973  207481 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:36:21.908051  207481 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:36:21.908122  207481 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:36:21.908130  207481 kubeadm.go:318] 
	I1025 09:36:21.908213  207481 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:36:21.908293  207481 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:36:21.908302  207481 kubeadm.go:318] 
	I1025 09:36:21.908384  207481 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token sclznf.2scjaaw40t7fuz90 \
	I1025 09:36:21.908489  207481 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b \
	I1025 09:36:21.908513  207481 kubeadm.go:318] 	--control-plane 
	I1025 09:36:21.908524  207481 kubeadm.go:318] 
	I1025 09:36:21.908609  207481 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:36:21.908617  207481 kubeadm.go:318] 
	I1025 09:36:21.908698  207481 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token sclznf.2scjaaw40t7fuz90 \
	I1025 09:36:21.908803  207481 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b 
	I1025 09:36:21.913400  207481 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 09:36:21.913640  207481 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 09:36:21.913755  207481 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:36:21.913771  207481 cni.go:84] Creating CNI manager for ""
	I1025 09:36:21.913779  207481 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:36:21.917065  207481 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:36:21.920010  207481 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:36:21.924919  207481 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:36:21.924991  207481 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:36:21.938843  207481 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:36:22.307964  207481 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:36:22.308159  207481 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:36:22.308254  207481 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-052144 minikube.k8s.io/updated_at=2025_10_25T09_36_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=newest-cni-052144 minikube.k8s.io/primary=true
	I1025 09:36:22.581196  207481 ops.go:34] apiserver oom_adj: -16
	I1025 09:36:22.581304  207481 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:36:23.082152  207481 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:36:23.581374  207481 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:36:24.082118  207481 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:36:24.581432  207481 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:36:25.081858  207481 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:36:25.582199  207481 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:36:25.680276  207481 kubeadm.go:1113] duration metric: took 3.372180955s to wait for elevateKubeSystemPrivileges
	I1025 09:36:25.680309  207481 kubeadm.go:402] duration metric: took 21.372046637s to StartCluster
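elevateKubeSystemPrivileges is the retry loop visible above: run `kubectl get sa default` every ~500ms until kubeadm's controllers have created the default ServiceAccount. A plain Go version of that loop (a sketch; binary path and kubeconfig come from the log, the one-minute budget is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll for the "default" ServiceAccount, same shape as the log's loop.
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}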
	I1025 09:36:25.680327  207481 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:25.680389  207481 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:36:25.681353  207481 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:25.681580  207481 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:36:25.681710  207481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:36:25.681970  207481 config.go:182] Loaded profile config "newest-cni-052144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:25.682041  207481 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:36:25.682104  207481 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-052144"
	I1025 09:36:25.682121  207481 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-052144"
	I1025 09:36:25.682147  207481 host.go:66] Checking if "newest-cni-052144" exists ...
	I1025 09:36:25.682841  207481 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:25.683412  207481 addons.go:69] Setting default-storageclass=true in profile "newest-cni-052144"
	I1025 09:36:25.683432  207481 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-052144"
	I1025 09:36:25.683707  207481 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:25.686056  207481 out.go:179] * Verifying Kubernetes components...
	I1025 09:36:25.689142  207481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:36:25.716820  207481 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1025 09:36:22.349951  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	W1025 09:36:24.350527  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	W1025 09:36:26.350995  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	I1025 09:36:25.719769  207481 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:36:25.719792  207481 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:36:25.719861  207481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:25.730156  207481 addons.go:238] Setting addon default-storageclass=true in "newest-cni-052144"
	I1025 09:36:25.730192  207481 host.go:66] Checking if "newest-cni-052144" exists ...
	I1025 09:36:25.730606  207481 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:25.767897  207481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:25.787532  207481 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:36:25.787553  207481 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:36:25.787617  207481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:25.820495  207481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:25.908267  207481 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:36:25.954892  207481 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:36:26.087089  207481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:36:26.096739  207481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:36:26.406110  207481 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
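The long sed pipeline at 09:36:25.908267 rewrites the coredns ConfigMap so the Corefile resolves host.minikube.internal to the gateway: it inserts a hosts stanza with fallthrough ahead of the forward plugin (and a log directive ahead of errors) before `kubectl replace`-ing the ConfigMap. A Go sketch of just the stanza insertion, applied to an illustrative Corefile fragment:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts stanza before the forward
// plugin, mirroring the sed pipeline in the log. The real flow reads the
// coredns ConfigMap, rewrites it, and pipes it to `kubectl replace -f -`.
func injectHostRecord(corefile, ip string) string {
	stanza := []string{
		"        hosts {",
		"           " + ip + " host.minikube.internal",
		"           fallthrough",
		"        }",
	}
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(line, "        forward . /etc/resolv.conf") {
			out = append(out, stanza...)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	sample := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}"
	fmt.Println(injectHostRecord(sample, "192.168.85.1"))
}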
	I1025 09:36:26.407098  207481 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:36:26.408873  207481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:36:26.875842  207481 api_server.go:72] duration metric: took 1.194227261s to wait for apiserver process to appear ...
	I1025 09:36:26.875920  207481 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:36:26.875950  207481 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:36:26.890265  207481 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 09:36:26.891349  207481 api_server.go:141] control plane version: v1.34.1
	I1025 09:36:26.891413  207481 api_server.go:131] duration metric: took 15.473016ms to wait for apiserver health ...
	I1025 09:36:26.891436  207481 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:36:26.909739  207481 system_pods.go:59] 8 kube-system pods found
	I1025 09:36:26.909770  207481 system_pods.go:61] "coredns-66bc5c9577-whxdx" [3df2d221-4d0f-4389-b1b1-78c0c980eb77] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 09:36:26.909781  207481 system_pods.go:61] "etcd-newest-cni-052144" [a5f918ce-23e0-463a-a637-4ecad2be6163] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:36:26.909791  207481 system_pods.go:61] "kindnet-c9wzk" [cf7b8b45-3b46-4a97-8c27-2eca0f408738] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 09:36:26.909798  207481 system_pods.go:61] "kube-apiserver-newest-cni-052144" [c0c3a020-1407-4e0e-9378-3a7d5f49fcd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:36:26.909807  207481 system_pods.go:61] "kube-controller-manager-newest-cni-052144" [3933334f-83c3-43fa-a233-f4931bd7224a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:36:26.909813  207481 system_pods.go:61] "kube-proxy-wh72x" [e3f00316-8d1f-4dd3-ad3b-7b973e951dc3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 09:36:26.909818  207481 system_pods.go:61] "kube-scheduler-newest-cni-052144" [ccba5caf-481b-4b6b-88d0-71e8581766dc] Running
	I1025 09:36:26.909824  207481 system_pods.go:61] "storage-provisioner" [c71a8288-c49a-4cf3-a34b-e5b06c1509ac] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 09:36:26.909829  207481 system_pods.go:74] duration metric: took 18.374046ms to wait for pod list to return data ...
	I1025 09:36:26.909837  207481 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:36:26.910844  207481 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 09:36:26.913387  207481 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-052144" context rescaled to 1 replicas
	I1025 09:36:26.914917  207481 addons.go:514] duration metric: took 1.232861129s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 09:36:26.915584  207481 default_sa.go:45] found service account: "default"
	I1025 09:36:26.915601  207481 default_sa.go:55] duration metric: took 5.758894ms for default service account to be created ...
	I1025 09:36:26.915612  207481 kubeadm.go:586] duration metric: took 1.234001187s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:36:26.915627  207481 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:36:26.923448  207481 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:36:26.923531  207481 node_conditions.go:123] node cpu capacity is 2
	I1025 09:36:26.923559  207481 node_conditions.go:105] duration metric: took 7.925015ms to run NodePressure ...
	I1025 09:36:26.923599  207481 start.go:241] waiting for startup goroutines ...
	I1025 09:36:26.923624  207481 start.go:246] waiting for cluster config update ...
	I1025 09:36:26.923652  207481 start.go:255] writing updated cluster config ...
	I1025 09:36:26.923918  207481 ssh_runner.go:195] Run: rm -f paused
	I1025 09:36:27.010190  207481 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:36:27.013244  207481 out.go:179] * Done! kubectl is now configured to use "newest-cni-052144" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.885613118Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.89273416Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=14409493-3b3c-4bf9-90b5-b857a9bbf527 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.893569485Z" level=info msg="Running pod sandbox: kube-system/kindnet-c9wzk/POD" id=4a096c95-f44c-4d25-b9c6-367b225acba8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.893618289Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.898101689Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=4a096c95-f44c-4d25-b9c6-367b225acba8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.900976184Z" level=info msg="Ran pod sandbox dfdf4508dec1304a7e9536c41f6f76c4e4dde83387c92fd457e2ca805887f897 with infra container: kube-system/kube-proxy-wh72x/POD" id=14409493-3b3c-4bf9-90b5-b857a9bbf527 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.904257026Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=10ada9eb-dcab-48eb-86d1-a9a3eb1677cf name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.906839981Z" level=info msg="Ran pod sandbox db24cab03d51683f151653025d77894ea6e379f6f2eb76698e97a82700171e08 with infra container: kube-system/kindnet-c9wzk/POD" id=4a096c95-f44c-4d25-b9c6-367b225acba8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.913850754Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f9b73abc-690a-42df-a6ef-c83b9b6c4825 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.916973883Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5bafde14-96df-4ec4-a827-96d7ace2716d name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.920659473Z" level=info msg="Creating container: kube-system/kube-proxy-wh72x/kube-proxy" id=ab564ab6-6e43-45cd-8276-bfd517298095 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.920962828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.922891252Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2ecb104e-0801-4da1-a6b0-51402f54e26b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.935741108Z" level=info msg="Creating container: kube-system/kindnet-c9wzk/kindnet-cni" id=dbe86191-427e-4150-b657-807b005e2371 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.936882044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.936262771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.939556241Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.94567308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:26 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.947020188Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:27 newest-cni-052144 crio[839]: time="2025-10-25T09:36:26.998623322Z" level=info msg="Created container 92f4ca068245c3ab5c750e3e1452e88ab399864c3d276a585e7f65ab02bae1b6: kube-system/kindnet-c9wzk/kindnet-cni" id=dbe86191-427e-4150-b657-807b005e2371 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:36:27 newest-cni-052144 crio[839]: time="2025-10-25T09:36:27.006431076Z" level=info msg="Starting container: 92f4ca068245c3ab5c750e3e1452e88ab399864c3d276a585e7f65ab02bae1b6" id=3fa7768b-16a5-4cc1-8348-ef388e086a80 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:36:27 newest-cni-052144 crio[839]: time="2025-10-25T09:36:27.008646396Z" level=info msg="Created container d3a176b9c941ed95fb00fb102e38a261effb9d65b3bcd59a066a574fc0f0b6f7: kube-system/kube-proxy-wh72x/kube-proxy" id=ab564ab6-6e43-45cd-8276-bfd517298095 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:36:27 newest-cni-052144 crio[839]: time="2025-10-25T09:36:27.009464465Z" level=info msg="Starting container: d3a176b9c941ed95fb00fb102e38a261effb9d65b3bcd59a066a574fc0f0b6f7" id=8d91c62d-3101-48e4-80c5-de1cedcae232 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:36:27 newest-cni-052144 crio[839]: time="2025-10-25T09:36:27.015197069Z" level=info msg="Started container" PID=1490 containerID=92f4ca068245c3ab5c750e3e1452e88ab399864c3d276a585e7f65ab02bae1b6 description=kube-system/kindnet-c9wzk/kindnet-cni id=3fa7768b-16a5-4cc1-8348-ef388e086a80 name=/runtime.v1.RuntimeService/StartContainer sandboxID=db24cab03d51683f151653025d77894ea6e379f6f2eb76698e97a82700171e08
	Oct 25 09:36:27 newest-cni-052144 crio[839]: time="2025-10-25T09:36:27.058679854Z" level=info msg="Started container" PID=1486 containerID=d3a176b9c941ed95fb00fb102e38a261effb9d65b3bcd59a066a574fc0f0b6f7 description=kube-system/kube-proxy-wh72x/kube-proxy id=8d91c62d-3101-48e4-80c5-de1cedcae232 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dfdf4508dec1304a7e9536c41f6f76c4e4dde83387c92fd457e2ca805887f897
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	92f4ca068245c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   1 second ago        Running             kindnet-cni               0                   db24cab03d516       kindnet-c9wzk                               kube-system
	d3a176b9c941e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   1 second ago        Running             kube-proxy                0                   dfdf4508dec13       kube-proxy-wh72x                            kube-system
	6c8c83d5da369       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            0                   d6d2c0a005de3       kube-scheduler-newest-cni-052144            kube-system
	20c83e38e2a75       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago      Running             kube-apiserver            0                   1386f010a8b24       kube-apiserver-newest-cni-052144            kube-system
	76ac6afbe2c8b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   0                   29075bb19f68c       kube-controller-manager-newest-cni-052144   kube-system
	1b45a1c67c67c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago      Running             etcd                      0                   d5590af8b1bab       etcd-newest-cni-052144                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-052144
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-052144
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=newest-cni-052144
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_36_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:36:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-052144
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:36:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:36:21 +0000   Sat, 25 Oct 2025 09:36:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:36:21 +0000   Sat, 25 Oct 2025 09:36:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:36:21 +0000   Sat, 25 Oct 2025 09:36:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 09:36:21 +0000   Sat, 25 Oct 2025 09:36:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-052144
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                520ddce7-f680-41ee-9a5a-5efd431b826c
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-052144                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7s
	  kube-system                 kindnet-c9wzk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-052144             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-052144    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-wh72x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-052144             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node newest-cni-052144 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-052144 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x8 over 14s)  kubelet          Node newest-cni-052144 status is now: NodeHasSufficientPID
	  Normal   Starting                 7s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 7s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7s                 kubelet          Node newest-cni-052144 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7s                 kubelet          Node newest-cni-052144 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7s                 kubelet          Node newest-cni-052144 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-052144 event: Registered Node newest-cni-052144 in Controller
	
	
	==> dmesg <==
	[ +18.632418] overlayfs: idmapped layers are currently not supported
	[ +13.271347] overlayfs: idmapped layers are currently not supported
	[Oct25 09:14] overlayfs: idmapped layers are currently not supported
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	[Oct25 09:32] overlayfs: idmapped layers are currently not supported
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	[ +28.289706] overlayfs: idmapped layers are currently not supported
	[Oct25 09:35] overlayfs: idmapped layers are currently not supported
	[Oct25 09:36] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1b45a1c67c67c14df70f9331d2494fdec96f694085fee89cac6def40816288ef] <==
	{"level":"warn","ts":"2025-10-25T09:36:17.165956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.191237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.204134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.233307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.238505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.256917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.273376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.290674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.306945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.326563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.346255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.367757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.410932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.412194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.425526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.450761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.466847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.484103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.501156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.518830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.536743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.566673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.584923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.606992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:17.700821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52888","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:36:28 up  1:18,  0 user,  load average: 3.88, 3.83, 3.04
	Linux newest-cni-052144 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [92f4ca068245c3ab5c750e3e1452e88ab399864c3d276a585e7f65ab02bae1b6] <==
	I1025 09:36:27.123614       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:36:27.125007       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 09:36:27.125197       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:36:27.125239       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:36:27.125272       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:36:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:36:27.323674       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:36:27.323697       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:36:27.323706       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:36:27.324372       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [20c83e38e2a757f8fcddaaacd6337b38c7b71145fdab431cb1ec6b0682903f5a] <==
	I1025 09:36:18.586208       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:36:18.586305       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 09:36:18.598936       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:36:18.613741       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:36:18.661158       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:36:18.661611       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 09:36:18.670262       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:36:18.670449       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:36:19.296938       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:36:19.302916       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:36:19.302941       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:36:20.148193       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:36:20.204031       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:36:20.295897       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:36:20.305286       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1025 09:36:20.306555       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:36:20.313118       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:36:20.468229       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:36:21.319400       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:36:21.388705       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:36:21.401880       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:36:26.180568       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:36:26.336394       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:36:26.346620       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:36:26.528522       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [76ac6afbe2c8b0fe9a3afbec3ccb9e592ffb05ce68b3dc30af49966fa31ec366] <==
	I1025 09:36:25.482075       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:36:25.482183       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 09:36:25.482221       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:36:25.482254       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:36:25.482289       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:36:25.482491       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:36:25.489588       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:36:25.489765       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:36:25.497091       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-052144" podCIDRs=["10.42.0.0/24"]
	I1025 09:36:25.498311       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:36:25.499472       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:36:25.509058       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:36:25.515797       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:36:25.515897       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:36:25.515962       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:36:25.516003       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:36:25.516347       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:36:25.518400       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:36:25.518485       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 09:36:25.519640       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:36:25.519724       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:36:25.519764       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:36:25.519905       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:36:25.520037       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-052144"
	I1025 09:36:25.520086       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d3a176b9c941ed95fb00fb102e38a261effb9d65b3bcd59a066a574fc0f0b6f7] <==
	I1025 09:36:27.150078       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:36:27.247831       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:36:27.349843       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:36:27.349878       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 09:36:27.349946       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:36:27.392082       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:36:27.392216       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:36:27.396891       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:36:27.397296       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:36:27.397486       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:36:27.398852       1 config.go:200] "Starting service config controller"
	I1025 09:36:27.398912       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:36:27.398954       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:36:27.398980       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:36:27.399014       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:36:27.399039       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:36:27.399761       1 config.go:309] "Starting node config controller"
	I1025 09:36:27.403550       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:36:27.403637       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:36:27.500034       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:36:27.510301       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:36:27.510347       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6c8c83d5da369011bc5d2ca5bd5e707effcc92fdb093643886c33bcf83ce371e] <==
	E1025 09:36:18.529125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:36:18.529158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:36:18.529191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:36:18.529223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:36:18.529391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:36:18.529439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:36:18.529481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:36:18.529596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:36:18.529653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:36:18.529686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:36:18.553879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 09:36:19.425129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:36:19.445825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:36:19.487691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:36:19.488144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:36:19.498154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:36:19.538734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:36:19.564361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:36:19.575261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:36:19.586311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:36:19.663214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:36:19.666487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 09:36:19.790555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:36:19.796669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1025 09:36:22.794116       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:36:21 newest-cni-052144 kubelet[1309]: I1025 09:36:21.686482    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c331f697174e2dda918d2ff3dfb09951-usr-local-share-ca-certificates\") pod \"kube-apiserver-newest-cni-052144\" (UID: \"c331f697174e2dda918d2ff3dfb09951\") " pod="kube-system/kube-apiserver-newest-cni-052144"
	Oct 25 09:36:21 newest-cni-052144 kubelet[1309]: I1025 09:36:21.686499    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c331f697174e2dda918d2ff3dfb09951-usr-share-ca-certificates\") pod \"kube-apiserver-newest-cni-052144\" (UID: \"c331f697174e2dda918d2ff3dfb09951\") " pod="kube-system/kube-apiserver-newest-cni-052144"
	Oct 25 09:36:21 newest-cni-052144 kubelet[1309]: I1025 09:36:21.686517    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0ae544ebdbbe122d12066904a2bb84e-ca-certs\") pod \"kube-controller-manager-newest-cni-052144\" (UID: \"a0ae544ebdbbe122d12066904a2bb84e\") " pod="kube-system/kube-controller-manager-newest-cni-052144"
	Oct 25 09:36:22 newest-cni-052144 kubelet[1309]: I1025 09:36:22.271951    1309 apiserver.go:52] "Watching apiserver"
	Oct 25 09:36:22 newest-cni-052144 kubelet[1309]: I1025 09:36:22.284653    1309 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 09:36:22 newest-cni-052144 kubelet[1309]: I1025 09:36:22.429242    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-052144" podStartSLOduration=1.4292215160000001 podStartE2EDuration="1.429221516s" podCreationTimestamp="2025-10-25 09:36:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:36:22.390551364 +0000 UTC m=+1.205073912" watchObservedRunningTime="2025-10-25 09:36:22.429221516 +0000 UTC m=+1.243743965"
	Oct 25 09:36:22 newest-cni-052144 kubelet[1309]: I1025 09:36:22.443769    1309 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-052144"
	Oct 25 09:36:22 newest-cni-052144 kubelet[1309]: I1025 09:36:22.444386    1309 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-052144"
	Oct 25 09:36:22 newest-cni-052144 kubelet[1309]: I1025 09:36:22.449024    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-052144" podStartSLOduration=1.448981335 podStartE2EDuration="1.448981335s" podCreationTimestamp="2025-10-25 09:36:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:36:22.448475606 +0000 UTC m=+1.262998056" watchObservedRunningTime="2025-10-25 09:36:22.448981335 +0000 UTC m=+1.263503785"
	Oct 25 09:36:22 newest-cni-052144 kubelet[1309]: I1025 09:36:22.449272    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-052144" podStartSLOduration=1.449239003 podStartE2EDuration="1.449239003s" podCreationTimestamp="2025-10-25 09:36:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:36:22.429440201 +0000 UTC m=+1.243962684" watchObservedRunningTime="2025-10-25 09:36:22.449239003 +0000 UTC m=+1.263761461"
	Oct 25 09:36:22 newest-cni-052144 kubelet[1309]: E1025 09:36:22.467066    1309 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-052144\" already exists" pod="kube-system/kube-apiserver-newest-cni-052144"
	Oct 25 09:36:22 newest-cni-052144 kubelet[1309]: E1025 09:36:22.472369    1309 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-052144\" already exists" pod="kube-system/kube-scheduler-newest-cni-052144"
	Oct 25 09:36:22 newest-cni-052144 kubelet[1309]: I1025 09:36:22.473920    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-052144" podStartSLOduration=1.47390384 podStartE2EDuration="1.47390384s" podCreationTimestamp="2025-10-25 09:36:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:36:22.47352731 +0000 UTC m=+1.288049817" watchObservedRunningTime="2025-10-25 09:36:22.47390384 +0000 UTC m=+1.288426298"
	Oct 25 09:36:25 newest-cni-052144 kubelet[1309]: I1025 09:36:25.524261    1309 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 09:36:25 newest-cni-052144 kubelet[1309]: I1025 09:36:25.525531    1309 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 09:36:26 newest-cni-052144 kubelet[1309]: I1025 09:36:26.642657    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e3f00316-8d1f-4dd3-ad3b-7b973e951dc3-kube-proxy\") pod \"kube-proxy-wh72x\" (UID: \"e3f00316-8d1f-4dd3-ad3b-7b973e951dc3\") " pod="kube-system/kube-proxy-wh72x"
	Oct 25 09:36:26 newest-cni-052144 kubelet[1309]: I1025 09:36:26.642698    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65x6w\" (UniqueName: \"kubernetes.io/projected/e3f00316-8d1f-4dd3-ad3b-7b973e951dc3-kube-api-access-65x6w\") pod \"kube-proxy-wh72x\" (UID: \"e3f00316-8d1f-4dd3-ad3b-7b973e951dc3\") " pod="kube-system/kube-proxy-wh72x"
	Oct 25 09:36:26 newest-cni-052144 kubelet[1309]: I1025 09:36:26.642719    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf7b8b45-3b46-4a97-8c27-2eca0f408738-lib-modules\") pod \"kindnet-c9wzk\" (UID: \"cf7b8b45-3b46-4a97-8c27-2eca0f408738\") " pod="kube-system/kindnet-c9wzk"
	Oct 25 09:36:26 newest-cni-052144 kubelet[1309]: I1025 09:36:26.642737    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rnfz\" (UniqueName: \"kubernetes.io/projected/cf7b8b45-3b46-4a97-8c27-2eca0f408738-kube-api-access-5rnfz\") pod \"kindnet-c9wzk\" (UID: \"cf7b8b45-3b46-4a97-8c27-2eca0f408738\") " pod="kube-system/kindnet-c9wzk"
	Oct 25 09:36:26 newest-cni-052144 kubelet[1309]: I1025 09:36:26.642757    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3f00316-8d1f-4dd3-ad3b-7b973e951dc3-xtables-lock\") pod \"kube-proxy-wh72x\" (UID: \"e3f00316-8d1f-4dd3-ad3b-7b973e951dc3\") " pod="kube-system/kube-proxy-wh72x"
	Oct 25 09:36:26 newest-cni-052144 kubelet[1309]: I1025 09:36:26.642773    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/cf7b8b45-3b46-4a97-8c27-2eca0f408738-cni-cfg\") pod \"kindnet-c9wzk\" (UID: \"cf7b8b45-3b46-4a97-8c27-2eca0f408738\") " pod="kube-system/kindnet-c9wzk"
	Oct 25 09:36:26 newest-cni-052144 kubelet[1309]: I1025 09:36:26.642794    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3f00316-8d1f-4dd3-ad3b-7b973e951dc3-lib-modules\") pod \"kube-proxy-wh72x\" (UID: \"e3f00316-8d1f-4dd3-ad3b-7b973e951dc3\") " pod="kube-system/kube-proxy-wh72x"
	Oct 25 09:36:26 newest-cni-052144 kubelet[1309]: I1025 09:36:26.642812    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf7b8b45-3b46-4a97-8c27-2eca0f408738-xtables-lock\") pod \"kindnet-c9wzk\" (UID: \"cf7b8b45-3b46-4a97-8c27-2eca0f408738\") " pod="kube-system/kindnet-c9wzk"
	Oct 25 09:36:26 newest-cni-052144 kubelet[1309]: I1025 09:36:26.773844    1309 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 09:36:27 newest-cni-052144 kubelet[1309]: I1025 09:36:27.517214    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-c9wzk" podStartSLOduration=1.517192941 podStartE2EDuration="1.517192941s" podCreationTimestamp="2025-10-25 09:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:36:27.486398606 +0000 UTC m=+6.300921056" watchObservedRunningTime="2025-10-25 09:36:27.517192941 +0000 UTC m=+6.331715390"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-052144 -n newest-cni-052144
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-052144 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-whxdx storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-052144 describe pod coredns-66bc5c9577-whxdx storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-052144 describe pod coredns-66bc5c9577-whxdx storage-provisioner: exit status 1 (94.201033ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-whxdx" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-052144 describe pod coredns-66bc5c9577-whxdx storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (7.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-052144 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-052144 --alsologtostderr -v=1: exit status 80 (1.954196391s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-052144 ... 
	
	

-- /stdout --
** stderr ** 
	I1025 09:36:47.202708  212837 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:36:47.202897  212837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:47.202910  212837 out.go:374] Setting ErrFile to fd 2...
	I1025 09:36:47.202915  212837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:47.203209  212837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:36:47.203453  212837 out.go:368] Setting JSON to false
	I1025 09:36:47.203475  212837 mustload.go:65] Loading cluster: newest-cni-052144
	I1025 09:36:47.203858  212837 config.go:182] Loaded profile config "newest-cni-052144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:47.204320  212837 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:47.232314  212837 host.go:66] Checking if "newest-cni-052144" exists ...
	I1025 09:36:47.232723  212837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:36:47.300213  212837 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 09:36:47.28800864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:36:47.300880  212837 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-052144 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:36:47.304696  212837 out.go:179] * Pausing node newest-cni-052144 ... 
	I1025 09:36:47.307928  212837 host.go:66] Checking if "newest-cni-052144" exists ...
	I1025 09:36:47.308464  212837 ssh_runner.go:195] Run: systemctl --version
	I1025 09:36:47.308516  212837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:47.327212  212837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:47.433805  212837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:36:47.461565  212837 pause.go:52] kubelet running: true
	I1025 09:36:47.461640  212837 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:36:47.680629  212837 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:36:47.680721  212837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:36:47.788408  212837 cri.go:89] found id: "10b980ab399dae84d11c3f4759ec594c676a369d118101e64cee4812c3180c5a"
	I1025 09:36:47.788431  212837 cri.go:89] found id: "1be8c9f0f660b7b829177a15ea343cd85eab19806d6a017c49f04c41ee9c815f"
	I1025 09:36:47.788437  212837 cri.go:89] found id: "d7070563a28b2ab73806b945b9883656a057f27f61629f911a4ed809c987d519"
	I1025 09:36:47.788449  212837 cri.go:89] found id: "22b38a011d6083a5d52b1656049438d0d0df32d5b7a4981c40343c7ca6b279c4"
	I1025 09:36:47.788453  212837 cri.go:89] found id: "5ae99446ae6aedbea3baa1c22e2f1ff0346551a5136113c7579f0e09d070e253"
	I1025 09:36:47.788457  212837 cri.go:89] found id: "1debff741ebda89c6f5555bf50231cbd526f0d6d17047a2dfd254dad44fe064c"
	I1025 09:36:47.788461  212837 cri.go:89] found id: ""
	I1025 09:36:47.788511  212837 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:36:47.801489  212837 retry.go:31] will retry after 274.958803ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:47Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:36:48.077025  212837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:36:48.091443  212837 pause.go:52] kubelet running: false
	I1025 09:36:48.091512  212837 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:36:48.259171  212837 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:36:48.259250  212837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:36:48.340357  212837 cri.go:89] found id: "10b980ab399dae84d11c3f4759ec594c676a369d118101e64cee4812c3180c5a"
	I1025 09:36:48.340377  212837 cri.go:89] found id: "1be8c9f0f660b7b829177a15ea343cd85eab19806d6a017c49f04c41ee9c815f"
	I1025 09:36:48.340382  212837 cri.go:89] found id: "d7070563a28b2ab73806b945b9883656a057f27f61629f911a4ed809c987d519"
	I1025 09:36:48.340385  212837 cri.go:89] found id: "22b38a011d6083a5d52b1656049438d0d0df32d5b7a4981c40343c7ca6b279c4"
	I1025 09:36:48.340388  212837 cri.go:89] found id: "5ae99446ae6aedbea3baa1c22e2f1ff0346551a5136113c7579f0e09d070e253"
	I1025 09:36:48.340392  212837 cri.go:89] found id: "1debff741ebda89c6f5555bf50231cbd526f0d6d17047a2dfd254dad44fe064c"
	I1025 09:36:48.340396  212837 cri.go:89] found id: ""
	I1025 09:36:48.340442  212837 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:36:48.362322  212837 retry.go:31] will retry after 456.425652ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:48Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:36:48.819924  212837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:36:48.833366  212837 pause.go:52] kubelet running: false
	I1025 09:36:48.833455  212837 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:36:48.979783  212837 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:36:48.979861  212837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:36:49.065193  212837 cri.go:89] found id: "10b980ab399dae84d11c3f4759ec594c676a369d118101e64cee4812c3180c5a"
	I1025 09:36:49.065218  212837 cri.go:89] found id: "1be8c9f0f660b7b829177a15ea343cd85eab19806d6a017c49f04c41ee9c815f"
	I1025 09:36:49.065223  212837 cri.go:89] found id: "d7070563a28b2ab73806b945b9883656a057f27f61629f911a4ed809c987d519"
	I1025 09:36:49.065227  212837 cri.go:89] found id: "22b38a011d6083a5d52b1656049438d0d0df32d5b7a4981c40343c7ca6b279c4"
	I1025 09:36:49.065230  212837 cri.go:89] found id: "5ae99446ae6aedbea3baa1c22e2f1ff0346551a5136113c7579f0e09d070e253"
	I1025 09:36:49.065236  212837 cri.go:89] found id: "1debff741ebda89c6f5555bf50231cbd526f0d6d17047a2dfd254dad44fe064c"
	I1025 09:36:49.065239  212837 cri.go:89] found id: ""
	I1025 09:36:49.065302  212837 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:36:49.080019  212837 out.go:203] 
	W1025 09:36:49.082873  212837 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:36:49.082962  212837 out.go:285] * 
	* 
	W1025 09:36:49.088237  212837 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:36:49.092071  212837 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-052144 --alsologtostderr -v=1 failed: exit status 80
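Reading the pause failure above: the pause path disables the kubelet, lists the kube-system/kubernetes-dashboard/istio-operator containers through crictl, then shells out to `sudo runc list -f json`; on this crio node /run/runc does not exist, so every attempt fails, retry.go backs off twice (274ms, then 456ms), and the command exits with GUEST_PAUSE. (The `%!s(bool=false)` tokens in the flag dump earlier are fmt verb mismatches, non-string values printed with %s, not corrupted data.) A minimal, hypothetical sketch of that retry-with-backoff loop, not minikube's actual retry.go:

	// A minimal, hypothetical sketch of the backoff pattern visible above:
	// rerun a command with growing, jittered waits and give up after a few
	// attempts.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func retryRuncList(attempts int) error {
		wait := 250 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			// The command the pause path runs on the node; it fails here
			// because /run/runc does not exist on this image.
			err = exec.Command("sudo", "runc", "list", "-f", "json").Run()
			if err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(wait) / 2))
			fmt.Printf("will retry after %v: %v\n", wait+jitter, err)
			time.Sleep(wait + jitter)
			wait *= 2
		}
		return err
	}

	func main() {
		if err := retryRuncList(3); err != nil {
			fmt.Println("giving up, as the pause command did:", err)
		}
	}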
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-052144
helpers_test.go:243: (dbg) docker inspect newest-cni-052144:

-- stdout --
	[
	    {
	        "Id": "e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a",
	        "Created": "2025-10-25T09:35:54.490444314Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 211082,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:36:31.401083234Z",
	            "FinishedAt": "2025-10-25T09:36:30.407169397Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a/hostname",
	        "HostsPath": "/var/lib/docker/containers/e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a/hosts",
	        "LogPath": "/var/lib/docker/containers/e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a/e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a-json.log",
	        "Name": "/newest-cni-052144",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-052144:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-052144",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a",
	                "LowerDir": "/var/lib/docker/overlay2/728840ee83faeb82b3599e3ae5f94f455fef7897c1a1e3a5bbf2533eeeba4cf0-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/728840ee83faeb82b3599e3ae5f94f455fef7897c1a1e3a5bbf2533eeeba4cf0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/728840ee83faeb82b3599e3ae5f94f455fef7897c1a1e3a5bbf2533eeeba4cf0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/728840ee83faeb82b3599e3ae5f94f455fef7897c1a1e3a5bbf2533eeeba4cf0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-052144",
	                "Source": "/var/lib/docker/volumes/newest-cni-052144/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-052144",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-052144",
	                "name.minikube.sigs.k8s.io": "newest-cni-052144",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "028b5bdbc5c84426320d25f0e188bbcbcb9556befdbd285b14d7015374e386e3",
	            "SandboxKey": "/var/run/docker/netns/028b5bdbc5c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-052144": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:b1:b3:1f:24:91",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b84d6961f80e53c1b14499713a231fa50516df29401cc2dc41dc3be0b29a7d71",
	                    "EndpointID": "3fa5360bbc6497b9c63f91478a9b0225774290c01587b21c69ee40a7d18a00e6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-052144",
	                        "e1443cadde6d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
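The inspect output above is what the earlier `docker container inspect -f` calls walk over: the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` indexes the Ports map, takes the first binding, and reads its HostPort (33088 here, the port the SSH client then dials). A minimal sketch evaluating the same template over a trimmed-down stand-in for the inspect document (the struct names are illustrative, not Docker's API types):

	// Minimal sketch: evaluate the same Go template the log's inspect calls
	// use, over a trimmed-down stand-in for the document above.
	package main

	import (
		"os"
		"text/template"
	)

	type portBinding struct{ HostIp, HostPort string }

	type inspectDoc struct {
		NetworkSettings struct {
			Ports map[string][]portBinding
		}
	}

	func main() {
		var doc inspectDoc
		doc.NetworkSettings.Ports = map[string][]portBinding{
			"22/tcp": {{HostIp: "127.0.0.1", HostPort: "33088"}},
		}
		// Same expression as in the log: index Ports by key, take the
		// first binding, read its HostPort.
		tmpl := template.Must(template.New("port").Parse(
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
		if err := tmpl.Execute(os.Stdout, doc); err != nil { // prints 33088
			panic(err)
		}
	}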
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-052144 -n newest-cni-052144
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-052144 -n newest-cni-052144: exit status 2 (350.715244ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-052144 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-052144 logs -n 25: (1.324978556s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-179869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │                     │
	│ stop    │ -p no-preload-179869 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p no-preload-179869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-173264 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ stop    │ -p embed-certs-173264 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-173264 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:35 UTC │
	│ image   │ no-preload-179869 image list --format=json                                                                                                                                                                                                    │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ pause   │ -p no-preload-179869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ delete  │ -p no-preload-179869                                                                                                                                                                                                                          │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p no-preload-179869                                                                                                                                                                                                                          │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p disable-driver-mounts-901717                                                                                                                                                                                                               │ disable-driver-mounts-901717 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-666079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:36 UTC │
	│ image   │ embed-certs-173264 image list --format=json                                                                                                                                                                                                   │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ pause   │ -p embed-certs-173264 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ delete  │ -p embed-certs-173264                                                                                                                                                                                                                         │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p embed-certs-173264                                                                                                                                                                                                                         │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ start   │ -p newest-cni-052144 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-052144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ stop    │ -p newest-cni-052144 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-052144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ start   │ -p newest-cni-052144 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ image   │ newest-cni-052144 image list --format=json                                                                                                                                                                                                    │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ pause   │ -p newest-cni-052144 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:36:31
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:36:31.103350  210957 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:36:31.103484  210957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:31.103500  210957 out.go:374] Setting ErrFile to fd 2...
	I1025 09:36:31.103505  210957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:31.103778  210957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:36:31.104229  210957 out.go:368] Setting JSON to false
	I1025 09:36:31.105171  210957 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4742,"bootTime":1761380249,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:36:31.105252  210957 start.go:141] virtualization:  
	I1025 09:36:31.108611  210957 out.go:179] * [newest-cni-052144] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:36:31.112744  210957 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:36:31.112792  210957 notify.go:220] Checking for updates...
	I1025 09:36:31.119064  210957 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:36:31.122069  210957 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:36:31.125089  210957 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:36:31.128012  210957 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:36:31.131086  210957 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:36:31.134588  210957 config.go:182] Loaded profile config "newest-cni-052144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:31.135180  210957 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:36:31.168323  210957 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:36:31.168452  210957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:36:31.230075  210957 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:36:31.219630379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:36:31.230230  210957 docker.go:318] overlay module found
	I1025 09:36:31.235280  210957 out.go:179] * Using the docker driver based on existing profile
	I1025 09:36:31.238112  210957 start.go:305] selected driver: docker
	I1025 09:36:31.238132  210957 start.go:925] validating driver "docker" against &{Name:newest-cni-052144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:36:31.238245  210957 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:36:31.238951  210957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:36:31.303016  210957 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:36:31.293102137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:36:31.303358  210957 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:36:31.303392  210957 cni.go:84] Creating CNI manager for ""
	I1025 09:36:31.303447  210957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:36:31.303485  210957 start.go:349] cluster config:
	{Name:newest-cni-052144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:36:31.306604  210957 out.go:179] * Starting "newest-cni-052144" primary control-plane node in "newest-cni-052144" cluster
	I1025 09:36:31.309469  210957 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:36:31.312459  210957 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:36:31.315299  210957 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:36:31.315358  210957 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:36:31.315370  210957 cache.go:58] Caching tarball of preloaded images
	I1025 09:36:31.315469  210957 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:36:31.315493  210957 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:36:31.315604  210957 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/config.json ...
	I1025 09:36:31.315827  210957 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:36:31.335097  210957 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:36:31.335120  210957 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:36:31.335139  210957 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:36:31.335162  210957 start.go:360] acquireMachinesLock for newest-cni-052144: {Name:mkdc11ad68e6ad5dad60c6abaa6ced1c93cec008 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:36:31.335220  210957 start.go:364] duration metric: took 35.906µs to acquireMachinesLock for "newest-cni-052144"
	I1025 09:36:31.335243  210957 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:36:31.335249  210957 fix.go:54] fixHost starting: 
	I1025 09:36:31.335521  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:31.367229  210957 fix.go:112] recreateIfNeeded on newest-cni-052144: state=Stopped err=<nil>
	W1025 09:36:31.367257  210957 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:36:28.850785  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	W1025 09:36:30.852113  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	I1025 09:36:31.370532  210957 out.go:252] * Restarting existing docker container for "newest-cni-052144" ...
	I1025 09:36:31.370613  210957 cli_runner.go:164] Run: docker start newest-cni-052144
	I1025 09:36:31.623230  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:31.646378  210957 kic.go:430] container "newest-cni-052144" state is running.
	I1025 09:36:31.646782  210957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-052144
	I1025 09:36:31.670310  210957 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/config.json ...
	I1025 09:36:31.670531  210957 machine.go:93] provisionDockerMachine start ...
	I1025 09:36:31.670592  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:31.691579  210957 main.go:141] libmachine: Using SSH client type: native
	I1025 09:36:31.691903  210957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1025 09:36:31.691912  210957 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:36:31.692551  210957 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 09:36:34.845762  210957 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-052144
	
	I1025 09:36:34.845796  210957 ubuntu.go:182] provisioning hostname "newest-cni-052144"
	I1025 09:36:34.845857  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:34.868440  210957 main.go:141] libmachine: Using SSH client type: native
	I1025 09:36:34.868747  210957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1025 09:36:34.868766  210957 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-052144 && echo "newest-cni-052144" | sudo tee /etc/hostname
	I1025 09:36:35.040716  210957 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-052144
	
	I1025 09:36:35.040795  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:35.059243  210957 main.go:141] libmachine: Using SSH client type: native
	I1025 09:36:35.059548  210957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1025 09:36:35.059571  210957 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-052144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-052144/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-052144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:36:35.214347  210957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:36:35.214383  210957 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:36:35.214417  210957 ubuntu.go:190] setting up certificates
	I1025 09:36:35.214433  210957 provision.go:84] configureAuth start
	I1025 09:36:35.214503  210957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-052144
	I1025 09:36:35.232141  210957 provision.go:143] copyHostCerts
	I1025 09:36:35.232217  210957 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:36:35.232237  210957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:36:35.232323  210957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 09:36:35.232434  210957 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:36:35.232445  210957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:36:35.232473  210957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:36:35.232541  210957 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:36:35.232551  210957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:36:35.232576  210957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:36:35.232640  210957 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.newest-cni-052144 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-052144]
	I1025 09:36:35.642545  210957 provision.go:177] copyRemoteCerts
	I1025 09:36:35.642620  210957 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:36:35.642659  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:35.660797  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:35.769871  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:36:35.787154  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:36:35.805460  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:36:35.822647  210957 provision.go:87] duration metric: took 608.197056ms to configureAuth
	I1025 09:36:35.822672  210957 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:36:35.822881  210957 config.go:182] Loaded profile config "newest-cni-052144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:35.822988  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:35.841876  210957 main.go:141] libmachine: Using SSH client type: native
	I1025 09:36:35.842219  210957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1025 09:36:35.842239  210957 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1025 09:36:33.350398  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	W1025 09:36:35.350727  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	I1025 09:36:36.164807  210957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:36:36.164880  210957 machine.go:96] duration metric: took 4.494332204s to provisionDockerMachine
	I1025 09:36:36.164909  210957 start.go:293] postStartSetup for "newest-cni-052144" (driver="docker")
	I1025 09:36:36.164950  210957 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:36:36.165037  210957 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:36:36.165128  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:36.183620  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:36.290102  210957 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:36:36.293429  210957 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:36:36.293457  210957 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:36:36.293469  210957 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:36:36.293524  210957 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:36:36.293604  210957 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:36:36.293707  210957 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:36:36.301851  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:36:36.319954  210957 start.go:296] duration metric: took 155.015654ms for postStartSetup
	I1025 09:36:36.320048  210957 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:36:36.320090  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:36.337184  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:36.438989  210957 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:36:36.443593  210957 fix.go:56] duration metric: took 5.10833821s for fixHost
	I1025 09:36:36.443614  210957 start.go:83] releasing machines lock for "newest-cni-052144", held for 5.108382224s
	I1025 09:36:36.443680  210957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-052144
	I1025 09:36:36.460965  210957 ssh_runner.go:195] Run: cat /version.json
	I1025 09:36:36.461014  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:36.461289  210957 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:36:36.461362  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:36.486335  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:36.487663  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:36.589856  210957 ssh_runner.go:195] Run: systemctl --version
	I1025 09:36:36.712376  210957 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:36:36.749326  210957 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:36:36.754089  210957 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:36:36.754168  210957 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:36:36.762864  210957 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:36:36.762893  210957 start.go:495] detecting cgroup driver to use...
	I1025 09:36:36.762925  210957 detect.go:187] detected "cgroupfs" cgroup driver on host os
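	The "detected cgroupfs" line is the host cgroup-driver probe. As a related, assumption-level sketch (not minikube's detect.go logic, which inspects the host differently), one common way to tell the cgroup hierarchy version apart from Go:

package main

import (
	"fmt"
	"os"
)

func main() {
	// cgroup v2 exposes cgroup.controllers at the unified mount point;
	// its absence implies a v1 (legacy) hierarchy.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 (legacy hierarchy)")
	}
}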
	I1025 09:36:36.762975  210957 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:36:36.778638  210957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:36:36.791393  210957 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:36:36.791502  210957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:36:36.807485  210957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:36:36.822700  210957 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:36:36.951668  210957 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:36:37.072707  210957 docker.go:234] disabling docker service ...
	I1025 09:36:37.072777  210957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:36:37.088287  210957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:36:37.101317  210957 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:36:37.225709  210957 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:36:37.352724  210957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:36:37.367462  210957 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:36:37.384526  210957 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:36:37.384631  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.393867  210957 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:36:37.393947  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.403185  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.412322  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.421215  210957 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:36:37.432241  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.441129  210957 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.449725  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
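	The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. A small Go sketch applying the two headline substitutions to a hypothetical fragment (the input text is illustrative; only the pause-image and cgroup-manager edits are shown):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical starting fragment of 02-crio.conf.
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	// Mirror the sed substitutions from the log.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}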
	I1025 09:36:37.459065  210957 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:36:37.466851  210957 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:36:37.474237  210957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:36:37.583290  210957 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:36:37.715219  210957 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:36:37.715299  210957 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:36:37.719459  210957 start.go:563] Will wait 60s for crictl version
	I1025 09:36:37.719564  210957 ssh_runner.go:195] Run: which crictl
	I1025 09:36:37.723333  210957 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:36:37.751547  210957 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:36:37.751637  210957 ssh_runner.go:195] Run: crio --version
	I1025 09:36:37.779261  210957 ssh_runner.go:195] Run: crio --version
	I1025 09:36:37.811652  210957 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:36:37.814623  210957 cli_runner.go:164] Run: docker network inspect newest-cni-052144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:36:37.838994  210957 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 09:36:37.844404  210957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:36:37.859012  210957 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 09:36:37.861855  210957 kubeadm.go:883] updating cluster {Name:newest-cni-052144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:36:37.862078  210957 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:36:37.862159  210957 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:36:37.897970  210957 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:36:37.898021  210957 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:36:37.898078  210957 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:36:37.928661  210957 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:36:37.928685  210957 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:36:37.928693  210957 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 09:36:37.928793  210957 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-052144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:36:37.928892  210957 ssh_runner.go:195] Run: crio config
	I1025 09:36:37.999333  210957 cni.go:84] Creating CNI manager for ""
	I1025 09:36:37.999360  210957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:36:37.999387  210957 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 09:36:37.999415  210957 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-052144 NodeName:newest-cni-052144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:36:37.999586  210957 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-052144"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
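	A quick sanity check implied by the generated config above: the pod subnet (10.42.0.0/16) must not overlap the service subnet (10.96.0.0/12). A hedged Go sketch of that check using net/netip:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	pod := netip.MustParsePrefix("10.42.0.0/16") // podSubnet from the config above
	svc := netip.MustParsePrefix("10.96.0.0/12") // serviceSubnet
	fmt.Println("pod/service overlap:", pod.Overlaps(svc)) // prints: false
}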
	
	I1025 09:36:37.999669  210957 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:36:38.009877  210957 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:36:38.010009  210957 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:36:38.019074  210957 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 09:36:38.035330  210957 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:36:38.053906  210957 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1025 09:36:38.069747  210957 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:36:38.074634  210957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:36:38.087360  210957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:36:38.222703  210957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:36:38.260299  210957 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144 for IP: 192.168.85.2
	I1025 09:36:38.260321  210957 certs.go:195] generating shared ca certs ...
	I1025 09:36:38.260336  210957 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:38.260469  210957 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:36:38.260515  210957 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:36:38.260527  210957 certs.go:257] generating profile certs ...
	I1025 09:36:38.260607  210957 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/client.key
	I1025 09:36:38.260685  210957 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.key.45317619
	I1025 09:36:38.260735  210957 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.key
	I1025 09:36:38.260859  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:36:38.260899  210957 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:36:38.260908  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:36:38.260938  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:36:38.260965  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:36:38.260992  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:36:38.261040  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:36:38.261675  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:36:38.288512  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:36:38.312796  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:36:38.335802  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:36:38.403349  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 09:36:38.479137  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:36:38.528489  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:36:38.556505  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:36:38.584695  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:36:38.617414  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:36:38.640238  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:36:38.660694  210957 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:36:38.689281  210957 ssh_runner.go:195] Run: openssl version
	I1025 09:36:38.696257  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:36:38.711246  210957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:36:38.715437  210957 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:36:38.715556  210957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:36:38.760860  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:36:38.769703  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:36:38.779399  210957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:36:38.783412  210957 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:36:38.783525  210957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:36:38.824954  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
	I1025 09:36:38.833108  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:36:38.841353  210957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:36:38.845398  210957 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:36:38.845516  210957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:36:38.895744  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:36:38.905166  210957 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:36:38.909610  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:36:38.959088  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:36:39.062861  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:36:39.144081  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:36:39.217495  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:36:39.283846  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
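	Each `openssl x509 -checkend 86400` probe above asks whether a certificate expires within the next 24 hours. A hedged Go equivalent using crypto/x509 (the path is one of those from the log; any PEM certificate works):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Same question as `openssl x509 -checkend 86400`.
	fmt.Println("expires within 24h:", time.Until(cert.NotAfter) < 24*time.Hour)
}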
	I1025 09:36:39.330138  210957 kubeadm.go:400] StartCluster: {Name:newest-cni-052144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:36:39.330239  210957 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:36:39.330303  210957 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:36:39.360521  210957 cri.go:89] found id: "d7070563a28b2ab73806b945b9883656a057f27f61629f911a4ed809c987d519"
	I1025 09:36:39.360545  210957 cri.go:89] found id: "22b38a011d6083a5d52b1656049438d0d0df32d5b7a4981c40343c7ca6b279c4"
	I1025 09:36:39.360550  210957 cri.go:89] found id: "5ae99446ae6aedbea3baa1c22e2f1ff0346551a5136113c7579f0e09d070e253"
	I1025 09:36:39.360554  210957 cri.go:89] found id: "1debff741ebda89c6f5555bf50231cbd526f0d6d17047a2dfd254dad44fe064c"
	I1025 09:36:39.360557  210957 cri.go:89] found id: ""
	I1025 09:36:39.360612  210957 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:36:39.371278  210957 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:39Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:36:39.371368  210957 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:36:39.379807  210957 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:36:39.379836  210957 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:36:39.379886  210957 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:36:39.398560  210957 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:36:39.399157  210957 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-052144" does not appear in /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:36:39.399411  210957 kubeconfig.go:62] /home/jenkins/minikube-integration/21796-2312/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-052144" cluster setting kubeconfig missing "newest-cni-052144" context setting]
	I1025 09:36:39.399871  210957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:39.401189  210957 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:36:39.408844  210957 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 09:36:39.408877  210957 kubeadm.go:601] duration metric: took 29.034073ms to restartPrimaryControlPlane
	I1025 09:36:39.408896  210957 kubeadm.go:402] duration metric: took 78.767348ms to StartCluster
	I1025 09:36:39.408911  210957 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:39.408971  210957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:36:39.409931  210957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:39.410232  210957 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:36:39.410606  210957 config.go:182] Loaded profile config "newest-cni-052144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:39.410639  210957 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:36:39.410737  210957 addons.go:69] Setting dashboard=true in profile "newest-cni-052144"
	I1025 09:36:39.410746  210957 addons.go:69] Setting default-storageclass=true in profile "newest-cni-052144"
	I1025 09:36:39.410751  210957 addons.go:238] Setting addon dashboard=true in "newest-cni-052144"
	I1025 09:36:39.410757  210957 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-052144"
	I1025 09:36:39.410737  210957 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-052144"
	I1025 09:36:39.410775  210957 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-052144"
	W1025 09:36:39.410781  210957 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:36:39.410803  210957 host.go:66] Checking if "newest-cni-052144" exists ...
	I1025 09:36:39.411063  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:39.411245  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	W1025 09:36:39.410758  210957 addons.go:247] addon dashboard should already be in state true
	I1025 09:36:39.412869  210957 host.go:66] Checking if "newest-cni-052144" exists ...
	I1025 09:36:39.413332  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:39.415691  210957 out.go:179] * Verifying Kubernetes components...
	I1025 09:36:39.418902  210957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:36:39.466146  210957 addons.go:238] Setting addon default-storageclass=true in "newest-cni-052144"
	W1025 09:36:39.466169  210957 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:36:39.466194  210957 host.go:66] Checking if "newest-cni-052144" exists ...
	I1025 09:36:39.466613  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:39.479623  210957 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:36:39.482511  210957 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:36:39.482532  210957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:36:39.482597  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:39.488583  210957 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:36:39.491528  210957 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 09:36:39.494347  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:36:39.494374  210957 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:36:39.494439  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:39.520516  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:39.541484  210957 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:36:39.541511  210957 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:36:39.541576  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:39.551308  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:39.578133  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:39.855979  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:36:39.856006  210957 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:36:39.876203  210957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:36:39.894915  210957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:36:39.903983  210957 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:36:39.904069  210957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:36:39.910391  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:36:39.910415  210957 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:36:39.941444  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:36:39.941469  210957 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:36:39.946533  210957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:36:40.008968  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:36:40.009046  210957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:36:40.087612  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:36:40.087684  210957 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:36:40.164783  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:36:40.164861  210957 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:36:40.208822  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:36:40.208924  210957 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:36:40.263093  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:36:40.263157  210957 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:36:40.283423  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:36:40.283494  210957 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:36:40.307151  210957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1025 09:36:37.850138  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	I1025 09:36:38.351269  203993 node_ready.go:49] node "default-k8s-diff-port-666079" is "Ready"
	I1025 09:36:38.351301  203993 node_ready.go:38] duration metric: took 41.004470382s for node "default-k8s-diff-port-666079" to be "Ready" ...
	I1025 09:36:38.351315  203993 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:36:38.351372  203993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:36:38.369823  203993 api_server.go:72] duration metric: took 42.30575869s to wait for apiserver process to appear ...
	I1025 09:36:38.369846  203993 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:36:38.369865  203993 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1025 09:36:38.384968  203993 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1025 09:36:38.386144  203993 api_server.go:141] control plane version: v1.34.1
	I1025 09:36:38.386170  203993 api_server.go:131] duration metric: took 16.314567ms to wait for apiserver health ...
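	The healthz wait above is a plain HTTPS GET that treats "200 ok" as healthy. A minimal sketch of the same probe (endpoint taken from the log; InsecureSkipVerify stands in for loading the cluster CA and is for illustration only):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
	}}
	resp, err := client.Get("https://192.168.76.2:8444/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // healthy apiserver: 200 ok
}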
	I1025 09:36:38.386179  203993 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:36:38.389526  203993 system_pods.go:59] 8 kube-system pods found
	I1025 09:36:38.389561  203993 system_pods.go:61] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:36:38.389569  203993 system_pods.go:61] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running
	I1025 09:36:38.389575  203993 system_pods.go:61] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:36:38.389579  203993 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running
	I1025 09:36:38.389584  203993 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running
	I1025 09:36:38.389589  203993 system_pods.go:61] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running
	I1025 09:36:38.389593  203993 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running
	I1025 09:36:38.389599  203993 system_pods.go:61] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:36:38.389606  203993 system_pods.go:74] duration metric: took 3.420831ms to wait for pod list to return data ...
	I1025 09:36:38.389614  203993 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:36:38.396448  203993 default_sa.go:45] found service account: "default"
	I1025 09:36:38.396470  203993 default_sa.go:55] duration metric: took 6.850589ms for default service account to be created ...
	I1025 09:36:38.396480  203993 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:36:38.417677  203993 system_pods.go:86] 8 kube-system pods found
	I1025 09:36:38.417761  203993 system_pods.go:89] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:36:38.417799  203993 system_pods.go:89] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running
	I1025 09:36:38.417810  203993 system_pods.go:89] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:36:38.417815  203993 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running
	I1025 09:36:38.417820  203993 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running
	I1025 09:36:38.417825  203993 system_pods.go:89] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running
	I1025 09:36:38.417830  203993 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running
	I1025 09:36:38.417838  203993 system_pods.go:89] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:36:38.417889  203993 retry.go:31] will retry after 264.659639ms: missing components: kube-dns
	I1025 09:36:38.706017  203993 system_pods.go:86] 8 kube-system pods found
	I1025 09:36:38.706046  203993 system_pods.go:89] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:36:38.706053  203993 system_pods.go:89] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running
	I1025 09:36:38.706059  203993 system_pods.go:89] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:36:38.706064  203993 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running
	I1025 09:36:38.706068  203993 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running
	I1025 09:36:38.706072  203993 system_pods.go:89] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running
	I1025 09:36:38.706076  203993 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running
	I1025 09:36:38.706083  203993 system_pods.go:89] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:36:38.706097  203993 retry.go:31] will retry after 380.355508ms: missing components: kube-dns
	I1025 09:36:39.091191  203993 system_pods.go:86] 8 kube-system pods found
	I1025 09:36:39.091224  203993 system_pods.go:89] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:36:39.091261  203993 system_pods.go:89] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running
	I1025 09:36:39.091269  203993 system_pods.go:89] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:36:39.091273  203993 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running
	I1025 09:36:39.091278  203993 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running
	I1025 09:36:39.091282  203993 system_pods.go:89] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running
	I1025 09:36:39.091286  203993 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running
	I1025 09:36:39.091291  203993 system_pods.go:89] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:36:39.091306  203993 retry.go:31] will retry after 469.185972ms: missing components: kube-dns
	I1025 09:36:39.591027  203993 system_pods.go:86] 8 kube-system pods found
	I1025 09:36:39.591055  203993 system_pods.go:89] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Running
	I1025 09:36:39.591063  203993 system_pods.go:89] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running
	I1025 09:36:39.591068  203993 system_pods.go:89] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:36:39.591073  203993 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running
	I1025 09:36:39.591078  203993 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running
	I1025 09:36:39.591081  203993 system_pods.go:89] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running
	I1025 09:36:39.591085  203993 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running
	I1025 09:36:39.591089  203993 system_pods.go:89] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Running
	I1025 09:36:39.591097  203993 system_pods.go:126] duration metric: took 1.19461026s to wait for k8s-apps to be running ...
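	The retry.go lines above show the polling pattern used while waiting for kube-dns: re-list the pods, sleep a randomized few hundred milliseconds, and try again until a deadline. A generic sketch of that loop (durations and the fake check are illustrative, not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// pollUntil retries check with a randomized sleep until it succeeds or the
// deadline elapses.
func pollUntil(deadline time.Duration, check func() error) error {
	start := time.Now()
	for time.Since(start) < deadline {
		if err := check(); err == nil {
			return nil
		}
		// roughly the 200-500ms waits seen in the log
		time.Sleep(time.Duration(200+rand.Intn(300)) * time.Millisecond)
	}
	return errors.New("timed out")
}

func main() {
	tries := 0
	err := pollUntil(5*time.Second, func() error {
		tries++
		if tries < 4 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println("err:", err, "tries:", tries)
}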
	I1025 09:36:39.591105  203993 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:36:39.591160  203993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:36:39.620213  203993 system_svc.go:56] duration metric: took 29.097853ms WaitForService to wait for kubelet
	I1025 09:36:39.620239  203993 kubeadm.go:586] duration metric: took 43.556178257s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:36:39.620256  203993 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:36:39.641042  203993 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:36:39.641128  203993 node_conditions.go:123] node cpu capacity is 2
	I1025 09:36:39.641144  203993 node_conditions.go:105] duration metric: took 20.882447ms to run NodePressure ...
	I1025 09:36:39.641157  203993 start.go:241] waiting for startup goroutines ...
	I1025 09:36:39.641164  203993 start.go:246] waiting for cluster config update ...
	I1025 09:36:39.641175  203993 start.go:255] writing updated cluster config ...
	I1025 09:36:39.641540  203993 ssh_runner.go:195] Run: rm -f paused
	I1025 09:36:39.648092  203993 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:36:39.660718  203993 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dzmkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.667907  203993 pod_ready.go:94] pod "coredns-66bc5c9577-dzmkq" is "Ready"
	I1025 09:36:39.667985  203993 pod_ready.go:86] duration metric: took 7.240108ms for pod "coredns-66bc5c9577-dzmkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.672226  203993 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.682557  203993 pod_ready.go:94] pod "etcd-default-k8s-diff-port-666079" is "Ready"
	I1025 09:36:39.682634  203993 pod_ready.go:86] duration metric: took 10.330991ms for pod "etcd-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.685592  203993 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.691481  203993 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-666079" is "Ready"
	I1025 09:36:39.691566  203993 pod_ready.go:86] duration metric: took 5.901016ms for pod "kube-apiserver-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.696198  203993 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:40.053617  203993 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-666079" is "Ready"
	I1025 09:36:40.053645  203993 pod_ready.go:86] duration metric: took 357.371072ms for pod "kube-controller-manager-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:40.252956  203993 pod_ready.go:83] waiting for pod "kube-proxy-65j7p" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:40.653393  203993 pod_ready.go:94] pod "kube-proxy-65j7p" is "Ready"
	I1025 09:36:40.653426  203993 pod_ready.go:86] duration metric: took 400.440546ms for pod "kube-proxy-65j7p" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:40.854040  203993 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:41.253307  203993 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-666079" is "Ready"
	I1025 09:36:41.253335  203993 pod_ready.go:86] duration metric: took 399.265392ms for pod "kube-scheduler-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:41.253348  203993 pod_ready.go:40] duration metric: took 1.60522578s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:36:41.353285  203993 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:36:41.357371  203993 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-666079" cluster and "default" namespace by default
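The minor-skew line just above is informational rather than an error: kubectl supports clusters within one minor version in either direction, so a 1.33 client against a 1.34.1 control plane is within policy. The two versions can be compared directly with a standard invocation:

	kubectl --context default-k8s-diff-port-666079 version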
	I1025 09:36:45.821926  210957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.926963852s)
	I1025 09:36:45.822007  210957 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.917916183s)
	I1025 09:36:45.822023  210957 api_server.go:72] duration metric: took 6.411758561s to wait for apiserver process to appear ...
	I1025 09:36:45.822033  210957 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:36:45.822050  210957 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:36:45.822359  210957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.875794516s)
	I1025 09:36:45.850819  210957 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:36:45.850844  210957 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:36:45.889663  210957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.582428505s)
	I1025 09:36:45.892890  210957 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-052144 addons enable metrics-server
	
	I1025 09:36:45.895838  210957 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1025 09:36:45.898778  210957 addons.go:514] duration metric: took 6.48813245s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 09:36:46.322707  210957 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:36:46.331370  210957 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 09:36:46.332647  210957 api_server.go:141] control plane version: v1.34.1
	I1025 09:36:46.332676  210957 api_server.go:131] duration metric: took 510.63666ms to wait for apiserver health ...
	I1025 09:36:46.332686  210957 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:36:46.336167  210957 system_pods.go:59] 8 kube-system pods found
	I1025 09:36:46.336203  210957 system_pods.go:61] "coredns-66bc5c9577-whxdx" [3df2d221-4d0f-4389-b1b1-78c0c980eb77] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 09:36:46.336213  210957 system_pods.go:61] "etcd-newest-cni-052144" [a5f918ce-23e0-463a-a637-4ecad2be6163] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:36:46.336240  210957 system_pods.go:61] "kindnet-c9wzk" [cf7b8b45-3b46-4a97-8c27-2eca0f408738] Running
	I1025 09:36:46.336254  210957 system_pods.go:61] "kube-apiserver-newest-cni-052144" [c0c3a020-1407-4e0e-9378-3a7d5f49fcd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:36:46.336261  210957 system_pods.go:61] "kube-controller-manager-newest-cni-052144" [3933334f-83c3-43fa-a233-f4931bd7224a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:36:46.336266  210957 system_pods.go:61] "kube-proxy-wh72x" [e3f00316-8d1f-4dd3-ad3b-7b973e951dc3] Running
	I1025 09:36:46.336275  210957 system_pods.go:61] "kube-scheduler-newest-cni-052144" [ccba5caf-481b-4b6b-88d0-71e8581766dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:36:46.336281  210957 system_pods.go:61] "storage-provisioner" [c71a8288-c49a-4cf3-a34b-e5b06c1509ac] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 09:36:46.336288  210957 system_pods.go:74] duration metric: took 3.595832ms to wait for pod list to return data ...
	I1025 09:36:46.336316  210957 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:36:46.340794  210957 default_sa.go:45] found service account: "default"
	I1025 09:36:46.340869  210957 default_sa.go:55] duration metric: took 4.535273ms for default service account to be created ...
	I1025 09:36:46.340906  210957 kubeadm.go:586] duration metric: took 6.930630187s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:36:46.340959  210957 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:36:46.347812  210957 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:36:46.347897  210957 node_conditions.go:123] node cpu capacity is 2
	I1025 09:36:46.347923  210957 node_conditions.go:105] duration metric: took 6.944408ms to run NodePressure ...
	I1025 09:36:46.347975  210957 start.go:241] waiting for startup goroutines ...
	I1025 09:36:46.348002  210957 start.go:246] waiting for cluster config update ...
	I1025 09:36:46.348029  210957 start.go:255] writing updated cluster config ...
	I1025 09:36:46.348380  210957 ssh_runner.go:195] Run: rm -f paused
	I1025 09:36:46.441580  210957 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:36:46.446488  210957 out.go:179] * Done! kubectl is now configured to use "newest-cni-052144" cluster and "default" namespace by default
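The healthz sequence above (a 500 at 09:36:45 with only [-]poststarthook/rbac/bootstrap-roles failing, then a 200 on the next check) is the normal apiserver startup pattern: the endpoint reports unhealthy until the bootstrap RBAC roles have been reconciled. The same endpoint can be probed by hand (a sketch, assuming the apiserver is still reachable at this address; ?verbose is a standard kube-apiserver healthz option that prints the per-check breakdown, and /healthz is readable anonymously in default RBAC):

	curl -k "https://192.168.85.2:8443/healthz?verbose"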
	
	
	==> CRI-O <==
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.779746349Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.782629278Z" level=info msg="Running pod sandbox: kube-system/kindnet-c9wzk/POD" id=e15c3b8d-5e9a-484d-81a6-9c2a2b551217 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.782687962Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.798364663Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=298f7eb8-a09f-4610-825c-87cc101b6055 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.804859851Z" level=info msg="Ran pod sandbox f266bdc37c6d74e1798f63af0442262fcbdaadc0084d69a27f52e166e28f80dd with infra container: kube-system/kube-proxy-wh72x/POD" id=298f7eb8-a09f-4610-825c-87cc101b6055 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.80956297Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e15c3b8d-5e9a-484d-81a6-9c2a2b551217 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.813764651Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ac2c62b4-4234-4df4-89c9-19f9e27b2d36 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.819307492Z" level=info msg="Ran pod sandbox f0aff115d7827dd9e38e0a74c2d3b5c0f432e092cdb2230f16e036aaa25799ab with infra container: kube-system/kindnet-c9wzk/POD" id=e15c3b8d-5e9a-484d-81a6-9c2a2b551217 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.821691913Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fa6f2ffc-0585-457a-bd13-63e895f65c7b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.82423664Z" level=info msg="Creating container: kube-system/kube-proxy-wh72x/kube-proxy" id=d93c363e-c6db-4ea7-9117-3e9cfe939915 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.824453685Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.825210945Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ce427135-de5b-486b-acc2-6bcdb87c3650 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.832139631Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5c816104-0d4b-439c-8e51-730a06a709e6 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.83310459Z" level=info msg="Creating container: kube-system/kindnet-c9wzk/kindnet-cni" id=3fbd6773-d767-40e2-8bef-1eb4a18dc6cf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.833187881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.850362331Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.85097577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.853787601Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.857607879Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.882919324Z" level=info msg="Created container 1be8c9f0f660b7b829177a15ea343cd85eab19806d6a017c49f04c41ee9c815f: kube-system/kindnet-c9wzk/kindnet-cni" id=3fbd6773-d767-40e2-8bef-1eb4a18dc6cf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.883719292Z" level=info msg="Starting container: 1be8c9f0f660b7b829177a15ea343cd85eab19806d6a017c49f04c41ee9c815f" id=5f3129b0-eeb7-4ad3-b7b6-377b3629c5e0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.888738304Z" level=info msg="Started container" PID=1059 containerID=1be8c9f0f660b7b829177a15ea343cd85eab19806d6a017c49f04c41ee9c815f description=kube-system/kindnet-c9wzk/kindnet-cni id=5f3129b0-eeb7-4ad3-b7b6-377b3629c5e0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0aff115d7827dd9e38e0a74c2d3b5c0f432e092cdb2230f16e036aaa25799ab
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.941034912Z" level=info msg="Created container 10b980ab399dae84d11c3f4759ec594c676a369d118101e64cee4812c3180c5a: kube-system/kube-proxy-wh72x/kube-proxy" id=d93c363e-c6db-4ea7-9117-3e9cfe939915 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.942066678Z" level=info msg="Starting container: 10b980ab399dae84d11c3f4759ec594c676a369d118101e64cee4812c3180c5a" id=69e33451-234e-43af-85e8-785737d23100 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.975906878Z" level=info msg="Started container" PID=1066 containerID=10b980ab399dae84d11c3f4759ec594c676a369d118101e64cee4812c3180c5a description=kube-system/kube-proxy-wh72x/kube-proxy id=69e33451-234e-43af-85e8-785737d23100 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f266bdc37c6d74e1798f63af0442262fcbdaadc0084d69a27f52e166e28f80dd
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	10b980ab399da       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   f266bdc37c6d7       kube-proxy-wh72x                            kube-system
	1be8c9f0f660b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   f0aff115d7827       kindnet-c9wzk                               kube-system
	d7070563a28b2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            1                   a897aa8c45d7e       kube-scheduler-newest-cni-052144            kube-system
	22b38a011d608       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   1                   2e4f7e987b286       kube-controller-manager-newest-cni-052144   kube-system
	5ae99446ae6ae       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      1                   2c03444d7cda0       etcd-newest-cni-052144                      kube-system
	1debff741ebda       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            1                   7ab135d24da74       kube-apiserver-newest-cni-052144            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-052144
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-052144
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=newest-cni-052144
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_36_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:36:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-052144
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:36:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:36:44 +0000   Sat, 25 Oct 2025 09:36:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:36:44 +0000   Sat, 25 Oct 2025 09:36:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:36:44 +0000   Sat, 25 Oct 2025 09:36:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 09:36:44 +0000   Sat, 25 Oct 2025 09:36:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-052144
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                520ddce7-f680-41ee-9a5a-5efd431b826c
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-052144                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         29s
	  kube-system                 kindnet-c9wzk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-newest-cni-052144             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-newest-cni-052144    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-wh72x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-newest-cni-052144             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 22s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node newest-cni-052144 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node newest-cni-052144 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node newest-cni-052144 status is now: NodeHasSufficientPID
	  Normal   Starting                 29s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 29s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  29s                kubelet          Node newest-cni-052144 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    29s                kubelet          Node newest-cni-052144 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     29s                kubelet          Node newest-cni-052144 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           25s                node-controller  Node newest-cni-052144 event: Registered Node newest-cni-052144 in Controller
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-052144 event: Registered Node newest-cni-052144 in Controller
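The Ready=False condition and the node.kubernetes.io/not-ready:NoSchedule taint above stem from the missing CNI configuration noted in the KubeletNotReady message; kindnet had only just restarted and had not yet written its config, which is also why coredns and storage-provisioner were reported Unschedulable earlier in the log. One quick way to confirm the CNI config has appeared (a sketch; the minikube node runs as a Docker container named after the profile, as the docker inspect output further below confirms):

	docker exec newest-cni-052144 ls /etc/cni/net.d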
	
	
	==> dmesg <==
	[ +13.271347] overlayfs: idmapped layers are currently not supported
	[Oct25 09:14] overlayfs: idmapped layers are currently not supported
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	[Oct25 09:32] overlayfs: idmapped layers are currently not supported
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	[ +28.289706] overlayfs: idmapped layers are currently not supported
	[Oct25 09:35] overlayfs: idmapped layers are currently not supported
	[Oct25 09:36] overlayfs: idmapped layers are currently not supported
	[ +24.160248] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5ae99446ae6aedbea3baa1c22e2f1ff0346551a5136113c7579f0e09d070e253] <==
	{"level":"warn","ts":"2025-10-25T09:36:42.757345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.782498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.799989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.848309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.860917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.868253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.886751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.902726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.946758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.965127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.973161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.994889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.010941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.032105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.057889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.073749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.087863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.106832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.141274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.173450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.206061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.236155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.247012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.269736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.340977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48644","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:36:50 up  1:19,  0 user,  load average: 3.58, 3.76, 3.04
	Linux newest-cni-052144 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1be8c9f0f660b7b829177a15ea343cd85eab19806d6a017c49f04c41ee9c815f] <==
	I1025 09:36:45.018332       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:36:45.018601       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 09:36:45.018716       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:36:45.018728       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:36:45.018745       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:36:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:36:45.255789       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:36:45.255819       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:36:45.255830       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:36:45.256379       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [1debff741ebda89c6f5555bf50231cbd526f0d6d17047a2dfd254dad44fe064c] <==
	I1025 09:36:44.579498       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:36:44.596035       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:36:44.596135       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:36:44.596177       1 policy_source.go:240] refreshing policies
	I1025 09:36:44.596241       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 09:36:44.596435       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:36:44.599634       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:36:44.599977       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:36:44.601160       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:36:44.601225       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:36:44.601257       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:36:44.635850       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:36:44.656962       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1025 09:36:44.727597       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:36:45.097443       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:36:45.451606       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:36:45.536029       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:36:45.651294       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:36:45.691896       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:36:45.860749       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.90.171"}
	I1025 09:36:45.882823       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.238.122"}
	I1025 09:36:48.051214       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:36:48.150820       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:36:48.308292       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:36:48.352746       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [22b38a011d6083a5d52b1656049438d0d0df32d5b7a4981c40343c7ca6b279c4] <==
	I1025 09:36:47.793746       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:36:47.794864       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:36:47.795238       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:36:47.795273       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:36:47.795302       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:36:47.799855       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:36:47.799923       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:36:47.802341       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:36:47.802906       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:36:47.805050       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:36:47.809651       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:36:47.810052       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:36:47.817958       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:36:47.821194       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:36:47.839695       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:36:47.844530       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:36:47.846871       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:36:47.846893       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:36:47.846901       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:36:47.848955       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:36:47.850169       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:36:47.850248       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:36:47.850318       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:36:47.865265       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:36:47.905531       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [10b980ab399dae84d11c3f4759ec594c676a369d118101e64cee4812c3180c5a] <==
	I1025 09:36:45.096546       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:36:45.352560       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:36:45.552220       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:36:45.552343       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 09:36:45.552452       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:36:45.799878       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:36:45.799999       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:36:45.833520       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:36:45.834175       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:36:45.834249       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:36:45.837616       1 config.go:200] "Starting service config controller"
	I1025 09:36:45.837704       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:36:45.837763       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:36:45.837790       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:36:45.837843       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:36:45.837871       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:36:45.838818       1 config.go:309] "Starting node config controller"
	I1025 09:36:45.838892       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:36:45.838922       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:36:45.938029       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:36:45.938063       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:36:45.938092       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d7070563a28b2ab73806b945b9883656a057f27f61629f911a4ed809c987d519] <==
	I1025 09:36:42.644297       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:36:44.241364       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:36:44.241396       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:36:44.241409       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:36:44.241416       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:36:44.409878       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:36:44.409918       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:36:44.413308       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:36:44.426164       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:36:44.427988       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:36:44.428029       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:36:44.631815       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:36:41 newest-cni-052144 kubelet[727]: E1025 09:36:41.534780     727 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-052144\" not found" node="newest-cni-052144"
	Oct 25 09:36:42 newest-cni-052144 kubelet[727]: E1025 09:36:42.380374     727 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-052144\" not found" node="newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.280870     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.349656     727 apiserver.go:52] "Watching apiserver"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.480685     727 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.487231     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/cf7b8b45-3b46-4a97-8c27-2eca0f408738-cni-cfg\") pod \"kindnet-c9wzk\" (UID: \"cf7b8b45-3b46-4a97-8c27-2eca0f408738\") " pod="kube-system/kindnet-c9wzk"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.487504     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf7b8b45-3b46-4a97-8c27-2eca0f408738-lib-modules\") pod \"kindnet-c9wzk\" (UID: \"cf7b8b45-3b46-4a97-8c27-2eca0f408738\") " pod="kube-system/kindnet-c9wzk"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.487619     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3f00316-8d1f-4dd3-ad3b-7b973e951dc3-lib-modules\") pod \"kube-proxy-wh72x\" (UID: \"e3f00316-8d1f-4dd3-ad3b-7b973e951dc3\") " pod="kube-system/kube-proxy-wh72x"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.487715     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf7b8b45-3b46-4a97-8c27-2eca0f408738-xtables-lock\") pod \"kindnet-c9wzk\" (UID: \"cf7b8b45-3b46-4a97-8c27-2eca0f408738\") " pod="kube-system/kindnet-c9wzk"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.487829     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3f00316-8d1f-4dd3-ad3b-7b973e951dc3-xtables-lock\") pod \"kube-proxy-wh72x\" (UID: \"e3f00316-8d1f-4dd3-ad3b-7b973e951dc3\") " pod="kube-system/kube-proxy-wh72x"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.698135     727 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.728907     727 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.729006     727 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.729035     727 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: E1025 09:36:44.729259     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-052144\" already exists" pod="kube-system/etcd-newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.729275     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.735608     727 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: E1025 09:36:44.767229     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-052144\" already exists" pod="kube-system/kube-apiserver-newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.773566     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: E1025 09:36:44.827579     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-052144\" already exists" pod="kube-system/kube-controller-manager-newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.827616     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: E1025 09:36:44.839922     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-052144\" already exists" pod="kube-system/kube-scheduler-newest-cni-052144"
	Oct 25 09:36:47 newest-cni-052144 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:36:47 newest-cni-052144 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:36:47 newest-cni-052144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
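The final systemd lines show kubelet.service being stopped at 09:36:47, which matches the Pause step under test: pausing a profile stops the kubelet while the node container keeps running. That is consistent with the non-zero status check below, where the API server process is still reported Running. The kubelet state can be queried the same way the harness queries the API server, using the same --format template mechanism (a sketch):

	out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-052144 -n newest-cni-052144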
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-052144 -n newest-cni-052144
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-052144 -n newest-cni-052144: exit status 2 (515.457384ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-052144 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-whxdx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-blbcx kubernetes-dashboard-855c9754f9-g7c5b
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-052144 describe pod coredns-66bc5c9577-whxdx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-blbcx kubernetes-dashboard-855c9754f9-g7c5b
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-052144 describe pod coredns-66bc5c9577-whxdx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-blbcx kubernetes-dashboard-855c9754f9-g7c5b: exit status 1 (118.046971ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-whxdx" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-blbcx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-g7c5b" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-052144 describe pod coredns-66bc5c9577-whxdx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-blbcx kubernetes-dashboard-855c9754f9-g7c5b: exit status 1
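The NotFound errors above are most likely a quirk of this helper rather than a sign the pods vanished: the pod list was gathered with -A across all namespaces, but the follow-up describe omits a namespace and therefore only searches default, where none of these pods live. Describing one of them directly would need the namespace (a sketch for the coredns pod, which lives in kube-system):

	kubectl --context newest-cni-052144 describe pod coredns-66bc5c9577-whxdx -n kube-system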
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-052144
helpers_test.go:243: (dbg) docker inspect newest-cni-052144:

-- stdout --
	[
	    {
	        "Id": "e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a",
	        "Created": "2025-10-25T09:35:54.490444314Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 211082,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:36:31.401083234Z",
	            "FinishedAt": "2025-10-25T09:36:30.407169397Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a/hostname",
	        "HostsPath": "/var/lib/docker/containers/e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a/hosts",
	        "LogPath": "/var/lib/docker/containers/e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a/e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a-json.log",
	        "Name": "/newest-cni-052144",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-052144:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-052144",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e1443cadde6d14168556fbd4f8f5d66ca11ccfbb99aefdca62b5041d9d96f21a",
	                "LowerDir": "/var/lib/docker/overlay2/728840ee83faeb82b3599e3ae5f94f455fef7897c1a1e3a5bbf2533eeeba4cf0-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/728840ee83faeb82b3599e3ae5f94f455fef7897c1a1e3a5bbf2533eeeba4cf0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/728840ee83faeb82b3599e3ae5f94f455fef7897c1a1e3a5bbf2533eeeba4cf0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/728840ee83faeb82b3599e3ae5f94f455fef7897c1a1e3a5bbf2533eeeba4cf0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-052144",
	                "Source": "/var/lib/docker/volumes/newest-cni-052144/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-052144",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-052144",
	                "name.minikube.sigs.k8s.io": "newest-cni-052144",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "028b5bdbc5c84426320d25f0e188bbcbcb9556befdbd285b14d7015374e386e3",
	            "SandboxKey": "/var/run/docker/netns/028b5bdbc5c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-052144": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:b1:b3:1f:24:91",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b84d6961f80e53c1b14499713a231fa50516df29401cc2dc41dc3be0b29a7d71",
	                    "EndpointID": "3fa5360bbc6497b9c63f91478a9b0225774290c01587b21c69ee40a7d18a00e6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-052144",
	                        "e1443cadde6d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
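The harness queries this same inspect data later in the log with Go templates (docker container inspect -f) rather than re-parsing the full JSON dump above. For manual spot checks against this container, equivalent single-field queries are (a sketch, using the container name from this run):

	docker container inspect newest-cni-052144 --format={{.State.Status}}
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-052144
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' newest-cni-052144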
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-052144 -n newest-cni-052144
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-052144 -n newest-cni-052144: exit status 2 (454.447717ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
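The "(may be ok)" caveat exists because minikube status encodes per-layer health in its exit code (per the status command's help: host, cluster, and kubernetes bits from the least significant upward), so a Running host container can still yield exit status 2 while the cluster layer is stopped or paused, which is expected in the middle of a Pause test. A sketch for inspecting the full breakdown:

	out/minikube-linux-arm64 status -p newest-cni-052144
	out/minikube-linux-arm64 status -p newest-cni-052144 -o json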
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-052144 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-052144 logs -n 25: (1.529648828s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p no-preload-179869 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p no-preload-179869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-173264 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ stop    │ -p embed-certs-173264 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-173264 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:35 UTC │
	│ image   │ no-preload-179869 image list --format=json                                                                                                                                                                                                    │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ pause   │ -p no-preload-179869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ delete  │ -p no-preload-179869                                                                                                                                                                                                                          │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p no-preload-179869                                                                                                                                                                                                                          │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p disable-driver-mounts-901717                                                                                                                                                                                                               │ disable-driver-mounts-901717 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-666079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:36 UTC │
	│ image   │ embed-certs-173264 image list --format=json                                                                                                                                                                                                   │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ pause   │ -p embed-certs-173264 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ delete  │ -p embed-certs-173264                                                                                                                                                                                                                         │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p embed-certs-173264                                                                                                                                                                                                                         │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ start   │ -p newest-cni-052144 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-052144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ stop    │ -p newest-cni-052144 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-052144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ start   │ -p newest-cni-052144 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ image   │ newest-cni-052144 image list --format=json                                                                                                                                                                                                    │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ pause   │ -p newest-cni-052144 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-666079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:36:31
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:36:31.103350  210957 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:36:31.103484  210957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:31.103500  210957 out.go:374] Setting ErrFile to fd 2...
	I1025 09:36:31.103505  210957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:31.103778  210957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:36:31.104229  210957 out.go:368] Setting JSON to false
	I1025 09:36:31.105171  210957 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4742,"bootTime":1761380249,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:36:31.105252  210957 start.go:141] virtualization:  
	I1025 09:36:31.108611  210957 out.go:179] * [newest-cni-052144] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:36:31.112744  210957 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:36:31.112792  210957 notify.go:220] Checking for updates...
	I1025 09:36:31.119064  210957 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:36:31.122069  210957 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:36:31.125089  210957 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:36:31.128012  210957 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:36:31.131086  210957 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:36:31.134588  210957 config.go:182] Loaded profile config "newest-cni-052144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:31.135180  210957 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:36:31.168323  210957 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:36:31.168452  210957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:36:31.230075  210957 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:36:31.219630379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:36:31.230230  210957 docker.go:318] overlay module found
	I1025 09:36:31.235280  210957 out.go:179] * Using the docker driver based on existing profile
	I1025 09:36:31.238112  210957 start.go:305] selected driver: docker
	I1025 09:36:31.238132  210957 start.go:925] validating driver "docker" against &{Name:newest-cni-052144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:36:31.238245  210957 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:36:31.238951  210957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:36:31.303016  210957 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:36:31.293102137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:36:31.303358  210957 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:36:31.303392  210957 cni.go:84] Creating CNI manager for ""
	I1025 09:36:31.303447  210957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:36:31.303485  210957 start.go:349] cluster config:
	{Name:newest-cni-052144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:36:31.306604  210957 out.go:179] * Starting "newest-cni-052144" primary control-plane node in "newest-cni-052144" cluster
	I1025 09:36:31.309469  210957 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:36:31.312459  210957 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:36:31.315299  210957 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:36:31.315358  210957 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:36:31.315370  210957 cache.go:58] Caching tarball of preloaded images
	I1025 09:36:31.315469  210957 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:36:31.315493  210957 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:36:31.315604  210957 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/config.json ...
	I1025 09:36:31.315827  210957 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:36:31.335097  210957 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:36:31.335120  210957 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:36:31.335139  210957 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:36:31.335162  210957 start.go:360] acquireMachinesLock for newest-cni-052144: {Name:mkdc11ad68e6ad5dad60c6abaa6ced1c93cec008 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:36:31.335220  210957 start.go:364] duration metric: took 35.906µs to acquireMachinesLock for "newest-cni-052144"
	I1025 09:36:31.335243  210957 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:36:31.335249  210957 fix.go:54] fixHost starting: 
	I1025 09:36:31.335521  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:31.367229  210957 fix.go:112] recreateIfNeeded on newest-cni-052144: state=Stopped err=<nil>
	W1025 09:36:31.367257  210957 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:36:28.850785  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	W1025 09:36:30.852113  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	I1025 09:36:31.370532  210957 out.go:252] * Restarting existing docker container for "newest-cni-052144" ...
	I1025 09:36:31.370613  210957 cli_runner.go:164] Run: docker start newest-cni-052144
	I1025 09:36:31.623230  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:31.646378  210957 kic.go:430] container "newest-cni-052144" state is running.
	I1025 09:36:31.646782  210957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-052144
	I1025 09:36:31.670310  210957 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/config.json ...
	I1025 09:36:31.670531  210957 machine.go:93] provisionDockerMachine start ...
	I1025 09:36:31.670592  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:31.691579  210957 main.go:141] libmachine: Using SSH client type: native
	I1025 09:36:31.691903  210957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1025 09:36:31.691912  210957 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:36:31.692551  210957 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 09:36:34.845762  210957 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-052144
	
	I1025 09:36:34.845796  210957 ubuntu.go:182] provisioning hostname "newest-cni-052144"
	I1025 09:36:34.845857  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:34.868440  210957 main.go:141] libmachine: Using SSH client type: native
	I1025 09:36:34.868747  210957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1025 09:36:34.868766  210957 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-052144 && echo "newest-cni-052144" | sudo tee /etc/hostname
	I1025 09:36:35.040716  210957 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-052144
	
	I1025 09:36:35.040795  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:35.059243  210957 main.go:141] libmachine: Using SSH client type: native
	I1025 09:36:35.059548  210957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1025 09:36:35.059571  210957 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-052144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-052144/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-052144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:36:35.214347  210957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:36:35.214383  210957 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:36:35.214417  210957 ubuntu.go:190] setting up certificates
	I1025 09:36:35.214433  210957 provision.go:84] configureAuth start
	I1025 09:36:35.214503  210957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-052144
	I1025 09:36:35.232141  210957 provision.go:143] copyHostCerts
	I1025 09:36:35.232217  210957 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:36:35.232237  210957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:36:35.232323  210957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 09:36:35.232434  210957 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:36:35.232445  210957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:36:35.232473  210957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:36:35.232541  210957 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:36:35.232551  210957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:36:35.232576  210957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:36:35.232640  210957 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.newest-cni-052144 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-052144]
	I1025 09:36:35.642545  210957 provision.go:177] copyRemoteCerts
	I1025 09:36:35.642620  210957 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:36:35.642659  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:35.660797  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:35.769871  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:36:35.787154  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:36:35.805460  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:36:35.822647  210957 provision.go:87] duration metric: took 608.197056ms to configureAuth
	I1025 09:36:35.822672  210957 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:36:35.822881  210957 config.go:182] Loaded profile config "newest-cni-052144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:35.822988  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:35.841876  210957 main.go:141] libmachine: Using SSH client type: native
	I1025 09:36:35.842219  210957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1025 09:36:35.842239  210957 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1025 09:36:33.350398  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	W1025 09:36:35.350727  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	I1025 09:36:36.164807  210957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:36:36.164880  210957 machine.go:96] duration metric: took 4.494332204s to provisionDockerMachine
	I1025 09:36:36.164909  210957 start.go:293] postStartSetup for "newest-cni-052144" (driver="docker")
	I1025 09:36:36.164950  210957 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:36:36.165037  210957 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:36:36.165128  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:36.183620  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:36.290102  210957 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:36:36.293429  210957 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:36:36.293457  210957 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:36:36.293469  210957 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:36:36.293524  210957 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:36:36.293604  210957 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:36:36.293707  210957 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:36:36.301851  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:36:36.319954  210957 start.go:296] duration metric: took 155.015654ms for postStartSetup
	I1025 09:36:36.320048  210957 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:36:36.320090  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:36.337184  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:36.438989  210957 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:36:36.443593  210957 fix.go:56] duration metric: took 5.10833821s for fixHost
	I1025 09:36:36.443614  210957 start.go:83] releasing machines lock for "newest-cni-052144", held for 5.108382224s
	I1025 09:36:36.443680  210957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-052144
	I1025 09:36:36.460965  210957 ssh_runner.go:195] Run: cat /version.json
	I1025 09:36:36.461014  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:36.461289  210957 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:36:36.461362  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:36.486335  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:36.487663  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:36.589856  210957 ssh_runner.go:195] Run: systemctl --version
	I1025 09:36:36.712376  210957 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:36:36.749326  210957 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:36:36.754089  210957 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:36:36.754168  210957 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:36:36.762864  210957 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:36:36.762893  210957 start.go:495] detecting cgroup driver to use...
	I1025 09:36:36.762925  210957 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:36:36.762975  210957 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:36:36.778638  210957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:36:36.791393  210957 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:36:36.791502  210957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:36:36.807485  210957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:36:36.822700  210957 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:36:36.951668  210957 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:36:37.072707  210957 docker.go:234] disabling docker service ...
	I1025 09:36:37.072777  210957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:36:37.088287  210957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:36:37.101317  210957 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:36:37.225709  210957 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:36:37.352724  210957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:36:37.367462  210957 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:36:37.384526  210957 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:36:37.384631  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.393867  210957 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:36:37.393947  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.403185  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.412322  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.421215  210957 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:36:37.432241  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.441129  210957 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.449725  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.459065  210957 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:36:37.466851  210957 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:36:37.474237  210957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:36:37.583290  210957 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:36:37.715219  210957 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:36:37.715299  210957 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:36:37.719459  210957 start.go:563] Will wait 60s for crictl version
	I1025 09:36:37.719564  210957 ssh_runner.go:195] Run: which crictl
	I1025 09:36:37.723333  210957 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:36:37.751547  210957 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:36:37.751637  210957 ssh_runner.go:195] Run: crio --version
	I1025 09:36:37.779261  210957 ssh_runner.go:195] Run: crio --version
	I1025 09:36:37.811652  210957 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:36:37.814623  210957 cli_runner.go:164] Run: docker network inspect newest-cni-052144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:36:37.838994  210957 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 09:36:37.844404  210957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:36:37.859012  210957 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 09:36:37.861855  210957 kubeadm.go:883] updating cluster {Name:newest-cni-052144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:36:37.862078  210957 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:36:37.862159  210957 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:36:37.897970  210957 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:36:37.898021  210957 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:36:37.898078  210957 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:36:37.928661  210957 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:36:37.928685  210957 cache_images.go:85] Images are preloaded, skipping loading
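
	The back-to-back `sudo crictl images --output json` runs above confirm the preload tarball already contains every image v1.34.1 needs, so both extraction and image loading are skipped. A quick way to eyeball the same inventory from the host (jq on the host is an assumption; the crictl part runs inside the node):

    # List the preloaded images the checks above inspect.
    # crictl's JSON output carries an .images[].repoTags array.
    minikube -p newest-cni-052144 ssh -- sudo crictl images --output json \
      | jq -r '.images[].repoTags[]' | sort
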
	I1025 09:36:37.928693  210957 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 09:36:37.928793  210957 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-052144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
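
	The unit text above is the systemd drop-in written a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes). To see the merged unit exactly as systemd resolves it on the node, something like:

    # Print kubelet.service plus every drop-in, as systemd composes them.
    minikube -p newest-cni-052144 ssh -- sudo systemctl cat kubelet
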
	I1025 09:36:37.928892  210957 ssh_runner.go:195] Run: crio config
	I1025 09:36:37.999333  210957 cni.go:84] Creating CNI manager for ""
	I1025 09:36:37.999360  210957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:36:37.999387  210957 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 09:36:37.999415  210957 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-052144 NodeName:newest-cni-052144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:36:37.999586  210957 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-052144"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
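
	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what lands on the node as /var/tmp/minikube/kubeadm.yaml.new (2212 bytes, scp'd below). The restart path later decides whether reconfiguration is needed by diffing that file against the live copy, which can be replayed by hand:

    # Same comparison the restart path runs via kubeadm.go below:
    # empty diff output means the running cluster needs no reconfiguration.
    minikube -p newest-cni-052144 ssh -- \
      sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
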
	
	I1025 09:36:37.999669  210957 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:36:38.009877  210957 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:36:38.010009  210957 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:36:38.019074  210957 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 09:36:38.035330  210957 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:36:38.053906  210957 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1025 09:36:38.069747  210957 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:36:38.074634  210957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:36:38.087360  210957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:36:38.222703  210957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:36:38.260299  210957 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144 for IP: 192.168.85.2
	I1025 09:36:38.260321  210957 certs.go:195] generating shared ca certs ...
	I1025 09:36:38.260336  210957 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:38.260469  210957 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:36:38.260515  210957 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:36:38.260527  210957 certs.go:257] generating profile certs ...
	I1025 09:36:38.260607  210957 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/client.key
	I1025 09:36:38.260685  210957 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.key.45317619
	I1025 09:36:38.260735  210957 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.key
	I1025 09:36:38.260859  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:36:38.260899  210957 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:36:38.260908  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:36:38.260938  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:36:38.260965  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:36:38.260992  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:36:38.261040  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:36:38.261675  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:36:38.288512  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:36:38.312796  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:36:38.335802  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:36:38.403349  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 09:36:38.479137  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:36:38.528489  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:36:38.556505  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:36:38.584695  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:36:38.617414  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:36:38.640238  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:36:38.660694  210957 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:36:38.689281  210957 ssh_runner.go:195] Run: openssl version
	I1025 09:36:38.696257  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:36:38.711246  210957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:36:38.715437  210957 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:36:38.715556  210957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:36:38.760860  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:36:38.769703  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:36:38.779399  210957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:36:38.783412  210957 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:36:38.783525  210957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:36:38.824954  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
	I1025 09:36:38.833108  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:36:38.841353  210957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:36:38.845398  210957 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:36:38.845516  210957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:36:38.895744  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
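
	The ls / `openssl x509 -hash` / ln sequence, repeated above for minikubeCA.pem, 4110.pem, and 41102.pem, implements OpenSSL's hashed-directory lookup: the subject-name hash (e.g. b5213941 for minikubeCA.pem in this run) names a <hash>.0 symlink in /etc/ssl/certs so OpenSSL can find the CA without scanning the directory. The convention in isolation:

    # Compute a CA's subject hash and install the <hash>.0 symlink
    # OpenSSL's hashed lookup expects (cert path is illustrative).
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in this run
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
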
	I1025 09:36:38.905166  210957 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:36:38.909610  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:36:38.959088  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:36:39.062861  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:36:39.144081  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:36:39.217495  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:36:39.283846  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
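
	Each `-checkend 86400` probe above succeeds only if the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit here is what would force certificate regeneration on the restart path. Standalone:

    # Exit 0 if still valid 24h from now, non-zero if it will have expired by then.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      -checkend 86400 && echo "ok for >=24h" || echo "expires within 24h"
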
	I1025 09:36:39.330138  210957 kubeadm.go:400] StartCluster: {Name:newest-cni-052144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:36:39.330239  210957 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:36:39.330303  210957 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:36:39.360521  210957 cri.go:89] found id: "d7070563a28b2ab73806b945b9883656a057f27f61629f911a4ed809c987d519"
	I1025 09:36:39.360545  210957 cri.go:89] found id: "22b38a011d6083a5d52b1656049438d0d0df32d5b7a4981c40343c7ca6b279c4"
	I1025 09:36:39.360550  210957 cri.go:89] found id: "5ae99446ae6aedbea3baa1c22e2f1ff0346551a5136113c7579f0e09d070e253"
	I1025 09:36:39.360554  210957 cri.go:89] found id: "1debff741ebda89c6f5555bf50231cbd526f0d6d17047a2dfd254dad44fe064c"
	I1025 09:36:39.360557  210957 cri.go:89] found id: ""
	I1025 09:36:39.360612  210957 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:36:39.371278  210957 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:39Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:36:39.371368  210957 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:36:39.379807  210957 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:36:39.379836  210957 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:36:39.379886  210957 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:36:39.398560  210957 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:36:39.399157  210957 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-052144" does not appear in /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:36:39.399411  210957 kubeconfig.go:62] /home/jenkins/minikube-integration/21796-2312/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-052144" cluster setting kubeconfig missing "newest-cni-052144" context setting]
	I1025 09:36:39.399871  210957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:39.401189  210957 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:36:39.408844  210957 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 09:36:39.408877  210957 kubeadm.go:601] duration metric: took 29.034073ms to restartPrimaryControlPlane
	I1025 09:36:39.408896  210957 kubeadm.go:402] duration metric: took 78.767348ms to StartCluster
	I1025 09:36:39.408911  210957 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:39.408971  210957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:36:39.409931  210957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:39.410232  210957 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:36:39.410606  210957 config.go:182] Loaded profile config "newest-cni-052144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:39.410639  210957 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:36:39.410737  210957 addons.go:69] Setting dashboard=true in profile "newest-cni-052144"
	I1025 09:36:39.410746  210957 addons.go:69] Setting default-storageclass=true in profile "newest-cni-052144"
	I1025 09:36:39.410751  210957 addons.go:238] Setting addon dashboard=true in "newest-cni-052144"
	I1025 09:36:39.410757  210957 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-052144"
	I1025 09:36:39.410737  210957 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-052144"
	I1025 09:36:39.410775  210957 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-052144"
	W1025 09:36:39.410781  210957 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:36:39.410803  210957 host.go:66] Checking if "newest-cni-052144" exists ...
	I1025 09:36:39.411063  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:39.411245  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	W1025 09:36:39.410758  210957 addons.go:247] addon dashboard should already be in state true
	I1025 09:36:39.412869  210957 host.go:66] Checking if "newest-cni-052144" exists ...
	I1025 09:36:39.413332  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:39.415691  210957 out.go:179] * Verifying Kubernetes components...
	I1025 09:36:39.418902  210957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:36:39.466146  210957 addons.go:238] Setting addon default-storageclass=true in "newest-cni-052144"
	W1025 09:36:39.466169  210957 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:36:39.466194  210957 host.go:66] Checking if "newest-cni-052144" exists ...
	I1025 09:36:39.466613  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:39.479623  210957 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:36:39.482511  210957 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:36:39.482532  210957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:36:39.482597  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:39.488583  210957 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:36:39.491528  210957 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 09:36:39.494347  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:36:39.494374  210957 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:36:39.494439  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:39.520516  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:39.541484  210957 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:36:39.541511  210957 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:36:39.541576  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:39.551308  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:39.578133  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:39.855979  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:36:39.856006  210957 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:36:39.876203  210957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:36:39.894915  210957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:36:39.903983  210957 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:36:39.904069  210957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:36:39.910391  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:36:39.910415  210957 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:36:39.941444  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:36:39.941469  210957 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:36:39.946533  210957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:36:40.008968  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:36:40.009046  210957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:36:40.087612  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:36:40.087684  210957 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:36:40.164783  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:36:40.164861  210957 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:36:40.208822  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:36:40.208924  210957 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:36:40.263093  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:36:40.263157  210957 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:36:40.283423  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:36:40.283494  210957 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:36:40.307151  210957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
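
	Addon installation is deliberately simple: each manifest is scp'd under /etc/kubernetes/addons and applied with the node's own kubectl binary against /var/lib/minikube/kubeconfig, exactly as the Run: line above shows. That makes any single addon re-applicable by hand, e.g.:

    # Replay one of the applies above from outside the node.
    minikube -p newest-cni-052144 ssh -- sudo \
      KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply \
      -f /etc/kubernetes/addons/dashboard-ns.yaml
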
	W1025 09:36:37.850138  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	I1025 09:36:38.351269  203993 node_ready.go:49] node "default-k8s-diff-port-666079" is "Ready"
	I1025 09:36:38.351301  203993 node_ready.go:38] duration metric: took 41.004470382s for node "default-k8s-diff-port-666079" to be "Ready" ...
	I1025 09:36:38.351315  203993 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:36:38.351372  203993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:36:38.369823  203993 api_server.go:72] duration metric: took 42.30575869s to wait for apiserver process to appear ...
	I1025 09:36:38.369846  203993 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:36:38.369865  203993 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1025 09:36:38.384968  203993 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1025 09:36:38.386144  203993 api_server.go:141] control plane version: v1.34.1
	I1025 09:36:38.386170  203993 api_server.go:131] duration metric: took 16.314567ms to wait for apiserver health ...
	I1025 09:36:38.386179  203993 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:36:38.389526  203993 system_pods.go:59] 8 kube-system pods found
	I1025 09:36:38.389561  203993 system_pods.go:61] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:36:38.389569  203993 system_pods.go:61] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running
	I1025 09:36:38.389575  203993 system_pods.go:61] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:36:38.389579  203993 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running
	I1025 09:36:38.389584  203993 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running
	I1025 09:36:38.389589  203993 system_pods.go:61] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running
	I1025 09:36:38.389593  203993 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running
	I1025 09:36:38.389599  203993 system_pods.go:61] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:36:38.389606  203993 system_pods.go:74] duration metric: took 3.420831ms to wait for pod list to return data ...
	I1025 09:36:38.389614  203993 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:36:38.396448  203993 default_sa.go:45] found service account: "default"
	I1025 09:36:38.396470  203993 default_sa.go:55] duration metric: took 6.850589ms for default service account to be created ...
	I1025 09:36:38.396480  203993 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:36:38.417677  203993 system_pods.go:86] 8 kube-system pods found
	I1025 09:36:38.417761  203993 system_pods.go:89] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:36:38.417799  203993 system_pods.go:89] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running
	I1025 09:36:38.417810  203993 system_pods.go:89] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:36:38.417815  203993 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running
	I1025 09:36:38.417820  203993 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running
	I1025 09:36:38.417825  203993 system_pods.go:89] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running
	I1025 09:36:38.417830  203993 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running
	I1025 09:36:38.417838  203993 system_pods.go:89] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:36:38.417889  203993 retry.go:31] will retry after 264.659639ms: missing components: kube-dns
	I1025 09:36:38.706017  203993 system_pods.go:86] 8 kube-system pods found
	I1025 09:36:38.706046  203993 system_pods.go:89] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:36:38.706053  203993 system_pods.go:89] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running
	I1025 09:36:38.706059  203993 system_pods.go:89] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:36:38.706064  203993 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running
	I1025 09:36:38.706068  203993 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running
	I1025 09:36:38.706072  203993 system_pods.go:89] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running
	I1025 09:36:38.706076  203993 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running
	I1025 09:36:38.706083  203993 system_pods.go:89] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:36:38.706097  203993 retry.go:31] will retry after 380.355508ms: missing components: kube-dns
	I1025 09:36:39.091191  203993 system_pods.go:86] 8 kube-system pods found
	I1025 09:36:39.091224  203993 system_pods.go:89] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:36:39.091261  203993 system_pods.go:89] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running
	I1025 09:36:39.091269  203993 system_pods.go:89] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:36:39.091273  203993 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running
	I1025 09:36:39.091278  203993 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running
	I1025 09:36:39.091282  203993 system_pods.go:89] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running
	I1025 09:36:39.091286  203993 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running
	I1025 09:36:39.091291  203993 system_pods.go:89] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:36:39.091306  203993 retry.go:31] will retry after 469.185972ms: missing components: kube-dns
	I1025 09:36:39.591027  203993 system_pods.go:86] 8 kube-system pods found
	I1025 09:36:39.591055  203993 system_pods.go:89] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Running
	I1025 09:36:39.591063  203993 system_pods.go:89] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running
	I1025 09:36:39.591068  203993 system_pods.go:89] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:36:39.591073  203993 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running
	I1025 09:36:39.591078  203993 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running
	I1025 09:36:39.591081  203993 system_pods.go:89] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running
	I1025 09:36:39.591085  203993 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running
	I1025 09:36:39.591089  203993 system_pods.go:89] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Running
	I1025 09:36:39.591097  203993 system_pods.go:126] duration metric: took 1.19461026s to wait for k8s-apps to be running ...
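
	The loop above polls the kube-system pod list with a growing backoff (~265ms, ~380ms, ~469ms) until kube-dns (coredns) leaves Pending; the whole wait costs about 1.19s. The equivalent interactive check:

    # Watch the same pods converge to Running.
    kubectl --context default-k8s-diff-port-666079 -n kube-system get pods -w
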
	I1025 09:36:39.591105  203993 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:36:39.591160  203993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:36:39.620213  203993 system_svc.go:56] duration metric: took 29.097853ms WaitForService to wait for kubelet
	I1025 09:36:39.620239  203993 kubeadm.go:586] duration metric: took 43.556178257s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:36:39.620256  203993 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:36:39.641042  203993 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:36:39.641128  203993 node_conditions.go:123] node cpu capacity is 2
	I1025 09:36:39.641144  203993 node_conditions.go:105] duration metric: took 20.882447ms to run NodePressure ...
	I1025 09:36:39.641157  203993 start.go:241] waiting for startup goroutines ...
	I1025 09:36:39.641164  203993 start.go:246] waiting for cluster config update ...
	I1025 09:36:39.641175  203993 start.go:255] writing updated cluster config ...
	I1025 09:36:39.641540  203993 ssh_runner.go:195] Run: rm -f paused
	I1025 09:36:39.648092  203993 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:36:39.660718  203993 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dzmkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.667907  203993 pod_ready.go:94] pod "coredns-66bc5c9577-dzmkq" is "Ready"
	I1025 09:36:39.667985  203993 pod_ready.go:86] duration metric: took 7.240108ms for pod "coredns-66bc5c9577-dzmkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.672226  203993 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.682557  203993 pod_ready.go:94] pod "etcd-default-k8s-diff-port-666079" is "Ready"
	I1025 09:36:39.682634  203993 pod_ready.go:86] duration metric: took 10.330991ms for pod "etcd-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.685592  203993 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.691481  203993 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-666079" is "Ready"
	I1025 09:36:39.691566  203993 pod_ready.go:86] duration metric: took 5.901016ms for pod "kube-apiserver-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.696198  203993 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:40.053617  203993 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-666079" is "Ready"
	I1025 09:36:40.053645  203993 pod_ready.go:86] duration metric: took 357.371072ms for pod "kube-controller-manager-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:40.252956  203993 pod_ready.go:83] waiting for pod "kube-proxy-65j7p" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:40.653393  203993 pod_ready.go:94] pod "kube-proxy-65j7p" is "Ready"
	I1025 09:36:40.653426  203993 pod_ready.go:86] duration metric: took 400.440546ms for pod "kube-proxy-65j7p" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:40.854040  203993 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:41.253307  203993 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-666079" is "Ready"
	I1025 09:36:41.253335  203993 pod_ready.go:86] duration metric: took 399.265392ms for pod "kube-scheduler-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:41.253348  203993 pod_ready.go:40] duration metric: took 1.60522578s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:36:41.353285  203993 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:36:41.357371  203993 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-666079" cluster and "default" namespace by default
	I1025 09:36:45.821926  210957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.926963852s)
	I1025 09:36:45.822007  210957 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.917916183s)
	I1025 09:36:45.822023  210957 api_server.go:72] duration metric: took 6.411758561s to wait for apiserver process to appear ...
	I1025 09:36:45.822033  210957 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:36:45.822050  210957 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:36:45.822359  210957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.875794516s)
	I1025 09:36:45.850819  210957 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:36:45.850844  210957 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
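
	The 500 above is transient: every check passes except the rbac/bootstrap-roles post-start hook, which reports failed until the freshly restarted apiserver finishes reconciling its bootstrap RBAC objects; the retry at 09:36:46 below gets a 200. The same per-check breakdown is available through kubectl:

    # Verbose healthz, matching the [+]/[-] listing in the log above.
    kubectl --context newest-cni-052144 get --raw '/healthz?verbose'
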
	I1025 09:36:45.889663  210957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.582428505s)
	I1025 09:36:45.892890  210957 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-052144 addons enable metrics-server
	
	I1025 09:36:45.895838  210957 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1025 09:36:45.898778  210957 addons.go:514] duration metric: took 6.48813245s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 09:36:46.322707  210957 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:36:46.331370  210957 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 09:36:46.332647  210957 api_server.go:141] control plane version: v1.34.1
	I1025 09:36:46.332676  210957 api_server.go:131] duration metric: took 510.63666ms to wait for apiserver health ...
	I1025 09:36:46.332686  210957 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:36:46.336167  210957 system_pods.go:59] 8 kube-system pods found
	I1025 09:36:46.336203  210957 system_pods.go:61] "coredns-66bc5c9577-whxdx" [3df2d221-4d0f-4389-b1b1-78c0c980eb77] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 09:36:46.336213  210957 system_pods.go:61] "etcd-newest-cni-052144" [a5f918ce-23e0-463a-a637-4ecad2be6163] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:36:46.336240  210957 system_pods.go:61] "kindnet-c9wzk" [cf7b8b45-3b46-4a97-8c27-2eca0f408738] Running
	I1025 09:36:46.336254  210957 system_pods.go:61] "kube-apiserver-newest-cni-052144" [c0c3a020-1407-4e0e-9378-3a7d5f49fcd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:36:46.336261  210957 system_pods.go:61] "kube-controller-manager-newest-cni-052144" [3933334f-83c3-43fa-a233-f4931bd7224a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:36:46.336266  210957 system_pods.go:61] "kube-proxy-wh72x" [e3f00316-8d1f-4dd3-ad3b-7b973e951dc3] Running
	I1025 09:36:46.336275  210957 system_pods.go:61] "kube-scheduler-newest-cni-052144" [ccba5caf-481b-4b6b-88d0-71e8581766dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:36:46.336281  210957 system_pods.go:61] "storage-provisioner" [c71a8288-c49a-4cf3-a34b-e5b06c1509ac] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 09:36:46.336288  210957 system_pods.go:74] duration metric: took 3.595832ms to wait for pod list to return data ...
	I1025 09:36:46.336316  210957 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:36:46.340794  210957 default_sa.go:45] found service account: "default"
	I1025 09:36:46.340869  210957 default_sa.go:55] duration metric: took 4.535273ms for default service account to be created ...
	I1025 09:36:46.340906  210957 kubeadm.go:586] duration metric: took 6.930630187s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:36:46.340959  210957 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:36:46.347812  210957 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:36:46.347897  210957 node_conditions.go:123] node cpu capacity is 2
	I1025 09:36:46.347923  210957 node_conditions.go:105] duration metric: took 6.944408ms to run NodePressure ...
	I1025 09:36:46.347975  210957 start.go:241] waiting for startup goroutines ...
	I1025 09:36:46.348002  210957 start.go:246] waiting for cluster config update ...
	I1025 09:36:46.348029  210957 start.go:255] writing updated cluster config ...
	I1025 09:36:46.348380  210957 ssh_runner.go:195] Run: rm -f paused
	I1025 09:36:46.441580  210957 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:36:46.446488  210957 out.go:179] * Done! kubectl is now configured to use "newest-cni-052144" cluster and "default" namespace by default
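
	The closing lines also record a one-minor-version skew between the client (kubectl 1.33.2) and the cluster (v1.34.1), which is inside kubectl's supported +/-1 window, hence a note rather than a warning. To check the skew for any context:

    # Prints client and server versions; kubectl warns when skew exceeds +/-1 minor.
    kubectl version --context newest-cni-052144
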
	
	
	==> CRI-O <==
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.779746349Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.782629278Z" level=info msg="Running pod sandbox: kube-system/kindnet-c9wzk/POD" id=e15c3b8d-5e9a-484d-81a6-9c2a2b551217 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.782687962Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.798364663Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=298f7eb8-a09f-4610-825c-87cc101b6055 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.804859851Z" level=info msg="Ran pod sandbox f266bdc37c6d74e1798f63af0442262fcbdaadc0084d69a27f52e166e28f80dd with infra container: kube-system/kube-proxy-wh72x/POD" id=298f7eb8-a09f-4610-825c-87cc101b6055 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.80956297Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e15c3b8d-5e9a-484d-81a6-9c2a2b551217 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.813764651Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ac2c62b4-4234-4df4-89c9-19f9e27b2d36 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.819307492Z" level=info msg="Ran pod sandbox f0aff115d7827dd9e38e0a74c2d3b5c0f432e092cdb2230f16e036aaa25799ab with infra container: kube-system/kindnet-c9wzk/POD" id=e15c3b8d-5e9a-484d-81a6-9c2a2b551217 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.821691913Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fa6f2ffc-0585-457a-bd13-63e895f65c7b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.82423664Z" level=info msg="Creating container: kube-system/kube-proxy-wh72x/kube-proxy" id=d93c363e-c6db-4ea7-9117-3e9cfe939915 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.824453685Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.825210945Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ce427135-de5b-486b-acc2-6bcdb87c3650 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.832139631Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5c816104-0d4b-439c-8e51-730a06a709e6 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.83310459Z" level=info msg="Creating container: kube-system/kindnet-c9wzk/kindnet-cni" id=3fbd6773-d767-40e2-8bef-1eb4a18dc6cf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.833187881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.850362331Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.85097577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.853787601Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.857607879Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.882919324Z" level=info msg="Created container 1be8c9f0f660b7b829177a15ea343cd85eab19806d6a017c49f04c41ee9c815f: kube-system/kindnet-c9wzk/kindnet-cni" id=3fbd6773-d767-40e2-8bef-1eb4a18dc6cf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.883719292Z" level=info msg="Starting container: 1be8c9f0f660b7b829177a15ea343cd85eab19806d6a017c49f04c41ee9c815f" id=5f3129b0-eeb7-4ad3-b7b6-377b3629c5e0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.888738304Z" level=info msg="Started container" PID=1059 containerID=1be8c9f0f660b7b829177a15ea343cd85eab19806d6a017c49f04c41ee9c815f description=kube-system/kindnet-c9wzk/kindnet-cni id=5f3129b0-eeb7-4ad3-b7b6-377b3629c5e0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0aff115d7827dd9e38e0a74c2d3b5c0f432e092cdb2230f16e036aaa25799ab
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.941034912Z" level=info msg="Created container 10b980ab399dae84d11c3f4759ec594c676a369d118101e64cee4812c3180c5a: kube-system/kube-proxy-wh72x/kube-proxy" id=d93c363e-c6db-4ea7-9117-3e9cfe939915 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.942066678Z" level=info msg="Starting container: 10b980ab399dae84d11c3f4759ec594c676a369d118101e64cee4812c3180c5a" id=69e33451-234e-43af-85e8-785737d23100 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:36:44 newest-cni-052144 crio[611]: time="2025-10-25T09:36:44.975906878Z" level=info msg="Started container" PID=1066 containerID=10b980ab399dae84d11c3f4759ec594c676a369d118101e64cee4812c3180c5a description=kube-system/kube-proxy-wh72x/kube-proxy id=69e33451-234e-43af-85e8-785737d23100 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f266bdc37c6d74e1798f63af0442262fcbdaadc0084d69a27f52e166e28f80dd
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	10b980ab399da       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   8 seconds ago       Running             kube-proxy                1                   f266bdc37c6d7       kube-proxy-wh72x                            kube-system
	1be8c9f0f660b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   8 seconds ago       Running             kindnet-cni               1                   f0aff115d7827       kindnet-c9wzk                               kube-system
	d7070563a28b2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            1                   a897aa8c45d7e       kube-scheduler-newest-cni-052144            kube-system
	22b38a011d608       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   1                   2e4f7e987b286       kube-controller-manager-newest-cni-052144   kube-system
	5ae99446ae6ae       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      1                   2c03444d7cda0       etcd-newest-cni-052144                      kube-system
	1debff741ebda       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            1                   7ab135d24da74       kube-apiserver-newest-cni-052144            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-052144
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-052144
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=newest-cni-052144
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_36_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:36:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-052144
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:36:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:36:44 +0000   Sat, 25 Oct 2025 09:36:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:36:44 +0000   Sat, 25 Oct 2025 09:36:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:36:44 +0000   Sat, 25 Oct 2025 09:36:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 09:36:44 +0000   Sat, 25 Oct 2025 09:36:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-052144
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                520ddce7-f680-41ee-9a5a-5efd431b826c
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-052144                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-c9wzk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-052144             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-052144    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-wh72x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-052144             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   Starting                 7s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  39s (x8 over 39s)  kubelet          Node newest-cni-052144 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet          Node newest-cni-052144 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     39s (x8 over 39s)  kubelet          Node newest-cni-052144 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node newest-cni-052144 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-052144 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s                kubelet          Node newest-cni-052144 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           28s                node-controller  Node newest-cni-052144 event: Registered Node newest-cni-052144 in Controller
	  Normal   RegisteredNode           6s                 node-controller  Node newest-cni-052144 event: Registered Node newest-cni-052144 in Controller
	
	
	==> dmesg <==
	[ +13.271347] overlayfs: idmapped layers are currently not supported
	[Oct25 09:14] overlayfs: idmapped layers are currently not supported
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	[Oct25 09:32] overlayfs: idmapped layers are currently not supported
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	[ +28.289706] overlayfs: idmapped layers are currently not supported
	[Oct25 09:35] overlayfs: idmapped layers are currently not supported
	[Oct25 09:36] overlayfs: idmapped layers are currently not supported
	[ +24.160248] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5ae99446ae6aedbea3baa1c22e2f1ff0346551a5136113c7579f0e09d070e253] <==
	{"level":"warn","ts":"2025-10-25T09:36:42.757345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.782498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.799989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.848309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.860917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.868253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.886751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.902726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.946758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.965127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.973161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:42.994889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.010941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.032105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.057889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.073749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.087863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.106832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.141274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.173450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.206061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.236155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.247012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.269736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:36:43.340977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48644","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:36:53 up  1:19,  0 user,  load average: 3.45, 3.73, 3.03
	Linux newest-cni-052144 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1be8c9f0f660b7b829177a15ea343cd85eab19806d6a017c49f04c41ee9c815f] <==
	I1025 09:36:45.018332       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:36:45.018601       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 09:36:45.018716       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:36:45.018728       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:36:45.018745       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:36:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:36:45.255789       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:36:45.255819       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:36:45.255830       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:36:45.256379       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [1debff741ebda89c6f5555bf50231cbd526f0d6d17047a2dfd254dad44fe064c] <==
	I1025 09:36:44.579498       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:36:44.596035       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:36:44.596135       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:36:44.596177       1 policy_source.go:240] refreshing policies
	I1025 09:36:44.596241       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 09:36:44.596435       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:36:44.599634       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:36:44.599977       1 aggregator.go:171] initial CRD sync complete...
	I1025 09:36:44.601160       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 09:36:44.601225       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 09:36:44.601257       1 cache.go:39] Caches are synced for autoregister controller
	I1025 09:36:44.635850       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:36:44.656962       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1025 09:36:44.727597       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:36:45.097443       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:36:45.451606       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:36:45.536029       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:36:45.651294       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:36:45.691896       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:36:45.860749       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.90.171"}
	I1025 09:36:45.882823       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.238.122"}
	I1025 09:36:48.051214       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:36:48.150820       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:36:48.308292       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:36:48.352746       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [22b38a011d6083a5d52b1656049438d0d0df32d5b7a4981c40343c7ca6b279c4] <==
	I1025 09:36:47.793746       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:36:47.794864       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:36:47.795238       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:36:47.795273       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:36:47.795302       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:36:47.799855       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:36:47.799923       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:36:47.802341       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:36:47.802906       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:36:47.805050       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:36:47.809651       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:36:47.810052       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:36:47.817958       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:36:47.821194       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:36:47.839695       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:36:47.844530       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:36:47.846871       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:36:47.846893       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:36:47.846901       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:36:47.848955       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:36:47.850169       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:36:47.850248       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:36:47.850318       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:36:47.865265       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:36:47.905531       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [10b980ab399dae84d11c3f4759ec594c676a369d118101e64cee4812c3180c5a] <==
	I1025 09:36:45.096546       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:36:45.352560       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:36:45.552220       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:36:45.552343       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 09:36:45.552452       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:36:45.799878       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:36:45.799999       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:36:45.833520       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:36:45.834175       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:36:45.834249       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:36:45.837616       1 config.go:200] "Starting service config controller"
	I1025 09:36:45.837704       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:36:45.837763       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:36:45.837790       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:36:45.837843       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:36:45.837871       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:36:45.838818       1 config.go:309] "Starting node config controller"
	I1025 09:36:45.838892       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:36:45.838922       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:36:45.938029       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:36:45.938063       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:36:45.938092       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d7070563a28b2ab73806b945b9883656a057f27f61629f911a4ed809c987d519] <==
	I1025 09:36:42.644297       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:36:44.241364       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:36:44.241396       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:36:44.241409       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:36:44.241416       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:36:44.409878       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:36:44.409918       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:36:44.413308       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:36:44.426164       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:36:44.427988       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:36:44.428029       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:36:44.631815       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:36:41 newest-cni-052144 kubelet[727]: E1025 09:36:41.534780     727 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-052144\" not found" node="newest-cni-052144"
	Oct 25 09:36:42 newest-cni-052144 kubelet[727]: E1025 09:36:42.380374     727 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-052144\" not found" node="newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.280870     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.349656     727 apiserver.go:52] "Watching apiserver"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.480685     727 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.487231     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/cf7b8b45-3b46-4a97-8c27-2eca0f408738-cni-cfg\") pod \"kindnet-c9wzk\" (UID: \"cf7b8b45-3b46-4a97-8c27-2eca0f408738\") " pod="kube-system/kindnet-c9wzk"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.487504     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf7b8b45-3b46-4a97-8c27-2eca0f408738-lib-modules\") pod \"kindnet-c9wzk\" (UID: \"cf7b8b45-3b46-4a97-8c27-2eca0f408738\") " pod="kube-system/kindnet-c9wzk"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.487619     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3f00316-8d1f-4dd3-ad3b-7b973e951dc3-lib-modules\") pod \"kube-proxy-wh72x\" (UID: \"e3f00316-8d1f-4dd3-ad3b-7b973e951dc3\") " pod="kube-system/kube-proxy-wh72x"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.487715     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf7b8b45-3b46-4a97-8c27-2eca0f408738-xtables-lock\") pod \"kindnet-c9wzk\" (UID: \"cf7b8b45-3b46-4a97-8c27-2eca0f408738\") " pod="kube-system/kindnet-c9wzk"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.487829     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3f00316-8d1f-4dd3-ad3b-7b973e951dc3-xtables-lock\") pod \"kube-proxy-wh72x\" (UID: \"e3f00316-8d1f-4dd3-ad3b-7b973e951dc3\") " pod="kube-system/kube-proxy-wh72x"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.698135     727 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.728907     727 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.729006     727 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.729035     727 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: E1025 09:36:44.729259     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-052144\" already exists" pod="kube-system/etcd-newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.729275     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.735608     727 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: E1025 09:36:44.767229     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-052144\" already exists" pod="kube-system/kube-apiserver-newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.773566     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: E1025 09:36:44.827579     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-052144\" already exists" pod="kube-system/kube-controller-manager-newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: I1025 09:36:44.827616     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-052144"
	Oct 25 09:36:44 newest-cni-052144 kubelet[727]: E1025 09:36:44.839922     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-052144\" already exists" pod="kube-system/kube-scheduler-newest-cni-052144"
	Oct 25 09:36:47 newest-cni-052144 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:36:47 newest-cni-052144 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:36:47 newest-cni-052144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-052144 -n newest-cni-052144
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-052144 -n newest-cni-052144: exit status 2 (418.022746ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-052144 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-whxdx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-blbcx kubernetes-dashboard-855c9754f9-g7c5b
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-052144 describe pod coredns-66bc5c9577-whxdx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-blbcx kubernetes-dashboard-855c9754f9-g7c5b
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-052144 describe pod coredns-66bc5c9577-whxdx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-blbcx kubernetes-dashboard-855c9754f9-g7c5b: exit status 1 (97.388936ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-whxdx" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-blbcx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-g7c5b" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-052144 describe pod coredns-66bc5c9577-whxdx storage-provisioner dashboard-metrics-scraper-6ffb444bf9-blbcx kubernetes-dashboard-855c9754f9-g7c5b: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.10s)
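
As a diagnostic aside: the pause-path failures in this run reduce to a runtime probe that the EnableAddonWhileActive stderr below shows verbatim ("runc: sudo runc list -f json ... open /run/runc: no such file or directory"). The following is a minimal Go sketch, not part of the test suite, that re-runs that probe by hand; the binary path and profile name are copied from this log, and using "minikube ssh" as the transport is an assumption:

	// runc_probe.go: manually re-run the paused-state probe seen in this report.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Binary path and profile name are taken from the log above.
		out, err := exec.Command(
			"out/minikube-linux-arm64", "ssh", "-p", "newest-cni-052144", "--",
			"sudo", "runc", "list", "-f", "json",
		).CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// On the crio nodes in this run, the equivalent probe failed with
			// "open /run/runc: no such file or directory".
			fmt.Println("probe failed:", err)
		}
	}
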

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-666079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-666079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (308.936369ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-666079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
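
The stderr above pins the exit to runc being unable to open /run/runc on this crio-based node. Below is a small sketch for inspecting which runtime state directories actually exist under /run on the node; the ssh transport is an assumption, as is the expectation of a crio-managed layout (e.g. /run/crio) rather than a standalone /run/runc:

	// runtime_root_check.go: list /run on the node to see which container
	// runtime state directories are present.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "ssh",
			"-p", "default-k8s-diff-port-666079", "--", "ls", "/run").CombinedOutput()
		if err != nil {
			fmt.Println("ssh failed:", err)
		}
		fmt.Printf("%s", out)
	}
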
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-666079 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-666079 describe deploy/metrics-server -n kube-system: exit status 1 (110.137507ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-666079 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
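
For context, the assertion at start_stop_delete_test.go:219 expects the metrics-server Deployment to reference the overridden image fake.domain/registry.k8s.io/echoserver:1.4; in this run the Deployment was never created, so the describe call returned NotFound and the deployment info stayed empty. A sketch of the same image check via jsonpath follows (the jsonpath form is an assumption; the test itself shells out to "kubectl describe"):

	// image_check.go: verify the metrics-server image override the test expects.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-666079",
			"-n", "kube-system", "get", "deploy", "metrics-server",
			"-o", "jsonpath={.spec.template.spec.containers[*].image}").CombinedOutput()
		if err != nil {
			// Matches this run: the Deployment does not exist.
			fmt.Println("deployment missing:", err)
			return
		}
		if strings.Contains(string(out), "fake.domain/registry.k8s.io/echoserver:1.4") {
			fmt.Println("addon image override applied")
		} else {
			fmt.Println("unexpected image(s):", string(out))
		}
	}
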
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-666079
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-666079:

-- stdout --
	[
	    {
	        "Id": "957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862",
	        "Created": "2025-10-25T09:35:22.279167682Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 204398,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:35:22.348966519Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/hostname",
	        "HostsPath": "/var/lib/docker/containers/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/hosts",
	        "LogPath": "/var/lib/docker/containers/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862-json.log",
	        "Name": "/default-k8s-diff-port-666079",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-666079:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-666079",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862",
	                "LowerDir": "/var/lib/docker/overlay2/5a2d5db135a98df69094a4c9e2b07f83f6518ae35a00f1aa521570af97c1888e-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a2d5db135a98df69094a4c9e2b07f83f6518ae35a00f1aa521570af97c1888e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a2d5db135a98df69094a4c9e2b07f83f6518ae35a00f1aa521570af97c1888e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a2d5db135a98df69094a4c9e2b07f83f6518ae35a00f1aa521570af97c1888e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-666079",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-666079/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-666079",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-666079",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-666079",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c583c00ad902dcaa1867d392eb519e5742f67ed1022fb31ad38a9c95b68cc033",
	            "SandboxKey": "/var/run/docker/netns/c583c00ad902",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-666079": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:13:bc:e0:2d:b3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fca20c11b6d784ec6e97d5309475016004c54db4ff0e1ebce1147f0efda81f09",
	                    "EndpointID": "2a8cd6b834960ad92a352663396679b1348d960aaa988624aa4d9f1aa8612842",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-666079",
	                        "957d2a4135a8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
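
One detail worth noting in the inspect output: this profile serves the API on the non-default port 8444 (the Audit table below shows the start flag --apiserver-port=8444), published to 127.0.0.1:33081 on the host. A minimal sketch probing that forwarded port from the host; skipping TLS verification and hitting /version are assumptions, and an apiserver may reject the anonymous request:

	// apiserver_probe.go: probe the host-published API server port from this run.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Host port 33081 maps to the container's 8444/tcp per the inspect above.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://127.0.0.1:33081/version")
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, string(body))
	}
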
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-666079 -n default-k8s-diff-port-666079
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-666079 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-666079 logs -n 25: (1.676672741s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p no-preload-179869 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:33 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p no-preload-179869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-173264 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │                     │
	│ stop    │ -p embed-certs-173264 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-173264 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:34 UTC │
	│ start   │ -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:34 UTC │ 25 Oct 25 09:35 UTC │
	│ image   │ no-preload-179869 image list --format=json                                                                                                                                                                                                    │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ pause   │ -p no-preload-179869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ delete  │ -p no-preload-179869                                                                                                                                                                                                                          │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p no-preload-179869                                                                                                                                                                                                                          │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p disable-driver-mounts-901717                                                                                                                                                                                                               │ disable-driver-mounts-901717 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-666079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:36 UTC │
	│ image   │ embed-certs-173264 image list --format=json                                                                                                                                                                                                   │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ pause   │ -p embed-certs-173264 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ delete  │ -p embed-certs-173264                                                                                                                                                                                                                         │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p embed-certs-173264                                                                                                                                                                                                                         │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ start   │ -p newest-cni-052144 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-052144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ stop    │ -p newest-cni-052144 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-052144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ start   │ -p newest-cni-052144 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ image   │ newest-cni-052144 image list --format=json                                                                                                                                                                                                    │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ pause   │ -p newest-cni-052144 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-666079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:36:31
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
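
The four header lines above describe the klog/glog format every line below follows. A minimal Go sketch that splits one such line into its fields; the regexp and names are illustrative, not minikube's own log parser:

package main

import (
	"fmt"
	"regexp"
)

// glogLine matches the documented format:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var glogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	sample := "I1025 09:36:31.103350  210957 out.go:360] Setting OutFile to fd 1 ..."
	if m := glogLine.FindStringSubmatch(sample); m != nil {
		fmt.Printf("severity=%s date=%s time=%s tid=%s loc=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
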
	I1025 09:36:31.103350  210957 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:36:31.103484  210957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:31.103500  210957 out.go:374] Setting ErrFile to fd 2...
	I1025 09:36:31.103505  210957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:31.103778  210957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:36:31.104229  210957 out.go:368] Setting JSON to false
	I1025 09:36:31.105171  210957 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4742,"bootTime":1761380249,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:36:31.105252  210957 start.go:141] virtualization:  
	I1025 09:36:31.108611  210957 out.go:179] * [newest-cni-052144] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:36:31.112744  210957 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:36:31.112792  210957 notify.go:220] Checking for updates...
	I1025 09:36:31.119064  210957 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:36:31.122069  210957 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:36:31.125089  210957 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:36:31.128012  210957 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:36:31.131086  210957 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:36:31.134588  210957 config.go:182] Loaded profile config "newest-cni-052144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:31.135180  210957 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:36:31.168323  210957 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:36:31.168452  210957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:36:31.230075  210957 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:36:31.219630379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
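
The `docker system info --format "{{json .}}"` call above returns the JSON blob that info.go decodes into the struct printed in the log. A minimal sketch decoding just a few of those fields (the struct here is hypothetical; the field names are taken from the dump above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo picks out a handful of the fields visible in the log line.
type dockerInfo struct {
	NCPU            int
	MemTotal        int64
	ServerVersion   string
	OperatingSystem string
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
}
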
	I1025 09:36:31.230230  210957 docker.go:318] overlay module found
	I1025 09:36:31.235280  210957 out.go:179] * Using the docker driver based on existing profile
	I1025 09:36:31.238112  210957 start.go:305] selected driver: docker
	I1025 09:36:31.238132  210957 start.go:925] validating driver "docker" against &{Name:newest-cni-052144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:36:31.238245  210957 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:36:31.238951  210957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:36:31.303016  210957 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:36:31.293102137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:36:31.303358  210957 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:36:31.303392  210957 cni.go:84] Creating CNI manager for ""
	I1025 09:36:31.303447  210957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
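
The kindnet recommendation follows from the driver/runtime pair: with the docker driver, the cri-o runtime brings no pod network of its own, so minikube selects kindnet. A toy sketch of that decision; the function and its "bridge" fallback are illustrative assumptions, not minikube's cni package:

package main

import "fmt"

// chooseCNI caricatures the decision logged by cni.go above: the
// docker driver plus the crio runtime yields kindnet. The fallback
// is an assumption for illustration only.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime == "crio" {
		return "kindnet"
	}
	return "bridge"
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // kindnet
}
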
	I1025 09:36:31.303485  210957 start.go:349] cluster config:
	{Name:newest-cni-052144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:36:31.306604  210957 out.go:179] * Starting "newest-cni-052144" primary control-plane node in "newest-cni-052144" cluster
	I1025 09:36:31.309469  210957 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:36:31.312459  210957 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:36:31.315299  210957 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:36:31.315358  210957 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:36:31.315370  210957 cache.go:58] Caching tarball of preloaded images
	I1025 09:36:31.315469  210957 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:36:31.315493  210957 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:36:31.315604  210957 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/config.json ...
	I1025 09:36:31.315827  210957 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:36:31.335097  210957 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:36:31.335120  210957 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:36:31.335139  210957 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:36:31.335162  210957 start.go:360] acquireMachinesLock for newest-cni-052144: {Name:mkdc11ad68e6ad5dad60c6abaa6ced1c93cec008 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:36:31.335220  210957 start.go:364] duration metric: took 35.906µs to acquireMachinesLock for "newest-cni-052144"
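
acquireMachinesLock serializes machine operations per profile; the log shows its retry parameters (Delay:500ms Timeout:10m0s). A file-based sketch of the same acquire-with-retry pattern; minikube itself uses a mutex library, so this is only an illustration, not its implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock retries until it can create the lock file exclusively,
// pausing delay between attempts and giving up after timeout.
func acquireLock(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
		if err == nil {
			f.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("could not acquire %s within %s", path, timeout)
		}
		time.Sleep(delay)
	}
}

func main() {
	fmt.Println(acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute))
}
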
	I1025 09:36:31.335243  210957 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:36:31.335249  210957 fix.go:54] fixHost starting: 
	I1025 09:36:31.335521  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:31.367229  210957 fix.go:112] recreateIfNeeded on newest-cni-052144: state=Stopped err=<nil>
	W1025 09:36:31.367257  210957 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 09:36:28.850785  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	W1025 09:36:30.852113  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	I1025 09:36:31.370532  210957 out.go:252] * Restarting existing docker container for "newest-cni-052144" ...
	I1025 09:36:31.370613  210957 cli_runner.go:164] Run: docker start newest-cni-052144
	I1025 09:36:31.623230  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:31.646378  210957 kic.go:430] container "newest-cni-052144" state is running.
	I1025 09:36:31.646782  210957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-052144
	I1025 09:36:31.670310  210957 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/config.json ...
	I1025 09:36:31.670531  210957 machine.go:93] provisionDockerMachine start ...
	I1025 09:36:31.670592  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:31.691579  210957 main.go:141] libmachine: Using SSH client type: native
	I1025 09:36:31.691903  210957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1025 09:36:31.691912  210957 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:36:31.692551  210957 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
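
The `ssh: handshake failed: EOF` above is transient: the container was restarted a fraction of a second earlier and sshd is not accepting connections yet, so the client retries, and the attempt three seconds later succeeds. A sketch of the underlying wait-for-port pattern (waitForSSH is a hypothetical helper; port 33088 is the host-mapped SSH port from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls the host-mapped SSH port until a TCP connection
// succeeds or the deadline passes. An early EOF or refused dial right
// after "docker start" just means sshd is not up yet.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForSSH("127.0.0.1:33088", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
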
	I1025 09:36:34.845762  210957 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-052144
	
	I1025 09:36:34.845796  210957 ubuntu.go:182] provisioning hostname "newest-cni-052144"
	I1025 09:36:34.845857  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:34.868440  210957 main.go:141] libmachine: Using SSH client type: native
	I1025 09:36:34.868747  210957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1025 09:36:34.868766  210957 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-052144 && echo "newest-cni-052144" | sudo tee /etc/hostname
	I1025 09:36:35.040716  210957 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-052144
	
	I1025 09:36:35.040795  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:35.059243  210957 main.go:141] libmachine: Using SSH client type: native
	I1025 09:36:35.059548  210957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1025 09:36:35.059571  210957 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-052144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-052144/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-052144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:36:35.214347  210957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:36:35.214383  210957 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:36:35.214417  210957 ubuntu.go:190] setting up certificates
	I1025 09:36:35.214433  210957 provision.go:84] configureAuth start
	I1025 09:36:35.214503  210957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-052144
	I1025 09:36:35.232141  210957 provision.go:143] copyHostCerts
	I1025 09:36:35.232217  210957 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:36:35.232237  210957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:36:35.232323  210957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 09:36:35.232434  210957 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:36:35.232445  210957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:36:35.232473  210957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:36:35.232541  210957 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:36:35.232551  210957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:36:35.232576  210957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:36:35.232640  210957 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.newest-cni-052144 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-052144]
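
The server certificate above is issued with the SAN list [127.0.0.1 192.168.85.2 localhost minikube newest-cni-052144] so it is valid under every address a client may use. A sketch of SAN-bearing certificate generation with crypto/x509, self-signed here for brevity; the real flow signs with ca.pem/ca-key.pem as the log states:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs mirror the log line: every IP and DNS name clients may use.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-052144"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-052144"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	// Self-signed (template is its own parent) purely to keep the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
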
	I1025 09:36:35.642545  210957 provision.go:177] copyRemoteCerts
	I1025 09:36:35.642620  210957 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:36:35.642659  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:35.660797  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:35.769871  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:36:35.787154  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:36:35.805460  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:36:35.822647  210957 provision.go:87] duration metric: took 608.197056ms to configureAuth
	I1025 09:36:35.822672  210957 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:36:35.822881  210957 config.go:182] Loaded profile config "newest-cni-052144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:35.822988  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:35.841876  210957 main.go:141] libmachine: Using SSH client type: native
	I1025 09:36:35.842219  210957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1025 09:36:35.842239  210957 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1025 09:36:33.350398  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	W1025 09:36:35.350727  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	I1025 09:36:36.164807  210957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:36:36.164880  210957 machine.go:96] duration metric: took 4.494332204s to provisionDockerMachine
	I1025 09:36:36.164909  210957 start.go:293] postStartSetup for "newest-cni-052144" (driver="docker")
	I1025 09:36:36.164950  210957 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:36:36.165037  210957 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:36:36.165128  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:36.183620  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:36.290102  210957 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:36:36.293429  210957 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:36:36.293457  210957 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
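
The VERSION_CODENAME warning above is harmless: the decoder maps /etc/os-release keys onto struct fields and complains about keys it has no field for. A map-based sketch of the same parse (parseOSRelease is illustrative, not minikube's helper):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseOSRelease reads /etc/os-release into a map; a struct-based
// decoder would instead skip keys with no matching field, which is
// exactly what the warning above reports.
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	kv := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			kv[k] = strings.Trim(v, `"`)
		}
	}
	return kv, sc.Err()
}

func main() {
	kv, _ := parseOSRelease("/etc/os-release")
	fmt.Println(kv["PRETTY_NAME"]) // e.g. Debian GNU/Linux 12 (bookworm)
}
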
	I1025 09:36:36.293469  210957 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:36:36.293524  210957 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:36:36.293604  210957 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:36:36.293707  210957 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:36:36.301851  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:36:36.319954  210957 start.go:296] duration metric: took 155.015654ms for postStartSetup
	I1025 09:36:36.320048  210957 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:36:36.320090  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:36.337184  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:36.438989  210957 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:36:36.443593  210957 fix.go:56] duration metric: took 5.10833821s for fixHost
	I1025 09:36:36.443614  210957 start.go:83] releasing machines lock for "newest-cni-052144", held for 5.108382224s
	I1025 09:36:36.443680  210957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-052144
	I1025 09:36:36.460965  210957 ssh_runner.go:195] Run: cat /version.json
	I1025 09:36:36.461014  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:36.461289  210957 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:36:36.461362  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:36.486335  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:36.487663  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:36.589856  210957 ssh_runner.go:195] Run: systemctl --version
	I1025 09:36:36.712376  210957 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:36:36.749326  210957 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:36:36.754089  210957 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:36:36.754168  210957 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:36:36.762864  210957 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:36:36.762893  210957 start.go:495] detecting cgroup driver to use...
	I1025 09:36:36.762925  210957 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:36:36.762975  210957 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:36:36.778638  210957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:36:36.791393  210957 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:36:36.791502  210957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:36:36.807485  210957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:36:36.822700  210957 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:36:36.951668  210957 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:36:37.072707  210957 docker.go:234] disabling docker service ...
	I1025 09:36:37.072777  210957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:36:37.088287  210957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:36:37.101317  210957 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:36:37.225709  210957 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:36:37.352724  210957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:36:37.367462  210957 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:36:37.384526  210957 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:36:37.384631  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.393867  210957 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:36:37.393947  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.403185  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.412322  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.421215  210957 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:36:37.432241  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.441129  210957 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.449725  210957 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:36:37.459065  210957 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:36:37.466851  210957 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:36:37.474237  210957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:36:37.583290  210957 ssh_runner.go:195] Run: sudo systemctl restart crio
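
The sed one-liners above each rewrite a single `key = value` line in /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and crio restart. The same edit expressed in Go (setConfKey is a hypothetical helper doing what the sed expressions do):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey replaces any existing `key = ...` line with `key = "value"`,
// mirroring sed 's|^.*key = .*$|key = "value"|'.
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAllLiteral(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	fmt.Println(setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1"))
	fmt.Println(setConfKey(conf, "cgroup_manager", "cgroupfs"))
}
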
	I1025 09:36:37.715219  210957 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:36:37.715299  210957 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:36:37.719459  210957 start.go:563] Will wait 60s for crictl version
	I1025 09:36:37.719564  210957 ssh_runner.go:195] Run: which crictl
	I1025 09:36:37.723333  210957 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:36:37.751547  210957 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
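
Both 60s waits above follow the same poll-until-ready pattern: first for the CRI socket to reappear after the crio restart, then for `crictl version` to answer. A sketch of the socket half (waitForSocket is illustrative, not minikube's function):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the CRI socket exists on disk or the
// timeout elapses; only then is it worth running `crictl version`.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
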
	I1025 09:36:37.751637  210957 ssh_runner.go:195] Run: crio --version
	I1025 09:36:37.779261  210957 ssh_runner.go:195] Run: crio --version
	I1025 09:36:37.811652  210957 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:36:37.814623  210957 cli_runner.go:164] Run: docker network inspect newest-cni-052144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:36:37.838994  210957 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 09:36:37.844404  210957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:36:37.859012  210957 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 09:36:37.861855  210957 kubeadm.go:883] updating cluster {Name:newest-cni-052144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:36:37.862078  210957 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:36:37.862159  210957 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:36:37.897970  210957 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:36:37.898021  210957 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:36:37.898078  210957 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:36:37.928661  210957 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:36:37.928685  210957 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:36:37.928693  210957 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 09:36:37.928793  210957 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-052144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:36:37.928892  210957 ssh_runner.go:195] Run: crio config
	I1025 09:36:37.999333  210957 cni.go:84] Creating CNI manager for ""
	I1025 09:36:37.999360  210957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:36:37.999387  210957 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 09:36:37.999415  210957 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-052144 NodeName:newest-cni-052144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:36:37.999586  210957 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-052144"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:36:37.999669  210957 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:36:38.009877  210957 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:36:38.010009  210957 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:36:38.019074  210957 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 09:36:38.035330  210957 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:36:38.053906  210957 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1025 09:36:38.069747  210957 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:36:38.074634  210957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:36:38.087360  210957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:36:38.222703  210957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:36:38.260299  210957 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144 for IP: 192.168.85.2
	I1025 09:36:38.260321  210957 certs.go:195] generating shared ca certs ...
	I1025 09:36:38.260336  210957 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:38.260469  210957 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:36:38.260515  210957 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:36:38.260527  210957 certs.go:257] generating profile certs ...
	I1025 09:36:38.260607  210957 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/client.key
	I1025 09:36:38.260685  210957 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.key.45317619
	I1025 09:36:38.260735  210957 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.key
	I1025 09:36:38.260859  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:36:38.260899  210957 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:36:38.260908  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:36:38.260938  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:36:38.260965  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:36:38.260992  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:36:38.261040  210957 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:36:38.261675  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:36:38.288512  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:36:38.312796  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:36:38.335802  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:36:38.403349  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 09:36:38.479137  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:36:38.528489  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:36:38.556505  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/newest-cni-052144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:36:38.584695  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:36:38.617414  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:36:38.640238  210957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:36:38.660694  210957 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:36:38.689281  210957 ssh_runner.go:195] Run: openssl version
	I1025 09:36:38.696257  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:36:38.711246  210957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:36:38.715437  210957 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:36:38.715556  210957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:36:38.760860  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:36:38.769703  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:36:38.779399  210957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:36:38.783412  210957 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:36:38.783525  210957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:36:38.824954  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
	I1025 09:36:38.833108  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:36:38.841353  210957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:36:38.845398  210957 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:36:38.845516  210957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:36:38.895744  210957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
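
The three `ln -fs` commands above follow OpenSSL's hashed-directory lookup convention: a CA file in /etc/ssl/certs is found via a symlink named <subject-hash>.<n>, where the hash comes from `openssl x509 -hash -noout` (and minikube always uses suffix .0). A minimal local sketch of the same dance in Go, assuming the openssl binary is on PATH; the certificate path is taken from the log for illustration only, and writing under /etc/ssl/certs requires root:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	// OpenSSL resolves CAs as <subject-hash>.<n>; minikube links <hash>.0.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link first
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
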
	I1025 09:36:38.905166  210957 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:36:38.909610  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:36:38.959088  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:36:39.062861  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:36:39.144081  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:36:39.217495  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:36:39.283846  210957 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
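
Each `-checkend 86400` probe above succeeds only if the certificate is still valid 86400 seconds (24 hours) from now; a non-zero exit would push minikube toward regenerating certs instead of reusing them. The same check can be done natively with crypto/x509, as in this sketch (file path taken from the log for illustration):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// openssl x509 -checkend 86400 fails if the cert expires within 86400s.
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past", deadline.Format(time.RFC3339))
}
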
	I1025 09:36:39.330138  210957 kubeadm.go:400] StartCluster: {Name:newest-cni-052144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-052144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:36:39.330239  210957 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:36:39.330303  210957 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:36:39.360521  210957 cri.go:89] found id: "d7070563a28b2ab73806b945b9883656a057f27f61629f911a4ed809c987d519"
	I1025 09:36:39.360545  210957 cri.go:89] found id: "22b38a011d6083a5d52b1656049438d0d0df32d5b7a4981c40343c7ca6b279c4"
	I1025 09:36:39.360550  210957 cri.go:89] found id: "5ae99446ae6aedbea3baa1c22e2f1ff0346551a5136113c7579f0e09d070e253"
	I1025 09:36:39.360554  210957 cri.go:89] found id: "1debff741ebda89c6f5555bf50231cbd526f0d6d17047a2dfd254dad44fe064c"
	I1025 09:36:39.360557  210957 cri.go:89] found id: ""
	I1025 09:36:39.360612  210957 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:36:39.371278  210957 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:39Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:36:39.371368  210957 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:36:39.379807  210957 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:36:39.379836  210957 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:36:39.379886  210957 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:36:39.398560  210957 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:36:39.399157  210957 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-052144" does not appear in /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:36:39.399411  210957 kubeconfig.go:62] /home/jenkins/minikube-integration/21796-2312/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-052144" cluster setting kubeconfig missing "newest-cni-052144" context setting]
	I1025 09:36:39.399871  210957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
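
The repair above adds the missing cluster and context stanzas to the kubeconfig and rewrites the file under a lock. A rough equivalent with client-go's clientcmd package, assuming illustrative paths and omitting minikube's file locking and credential wiring:

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/.kube/config" // illustrative path
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = clientcmdapi.NewConfig() // start fresh if the file is missing
	}
	name := "newest-cni-052144"
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &clientcmdapi.Cluster{Server: "https://192.168.85.2:8443"}
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	}
	cfg.CurrentContext = name
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
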
	I1025 09:36:39.401189  210957 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:36:39.408844  210957 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 09:36:39.408877  210957 kubeadm.go:601] duration metric: took 29.034073ms to restartPrimaryControlPlane
	I1025 09:36:39.408896  210957 kubeadm.go:402] duration metric: took 78.767348ms to StartCluster
	I1025 09:36:39.408911  210957 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:39.408971  210957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:36:39.409931  210957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:36:39.410232  210957 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:36:39.410606  210957 config.go:182] Loaded profile config "newest-cni-052144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:39.410639  210957 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:36:39.410737  210957 addons.go:69] Setting dashboard=true in profile "newest-cni-052144"
	I1025 09:36:39.410746  210957 addons.go:69] Setting default-storageclass=true in profile "newest-cni-052144"
	I1025 09:36:39.410751  210957 addons.go:238] Setting addon dashboard=true in "newest-cni-052144"
	I1025 09:36:39.410757  210957 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-052144"
	I1025 09:36:39.410737  210957 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-052144"
	I1025 09:36:39.410775  210957 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-052144"
	W1025 09:36:39.410781  210957 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:36:39.410803  210957 host.go:66] Checking if "newest-cni-052144" exists ...
	I1025 09:36:39.411063  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:39.411245  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	W1025 09:36:39.410758  210957 addons.go:247] addon dashboard should already be in state true
	I1025 09:36:39.412869  210957 host.go:66] Checking if "newest-cni-052144" exists ...
	I1025 09:36:39.413332  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:39.415691  210957 out.go:179] * Verifying Kubernetes components...
	I1025 09:36:39.418902  210957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:36:39.466146  210957 addons.go:238] Setting addon default-storageclass=true in "newest-cni-052144"
	W1025 09:36:39.466169  210957 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:36:39.466194  210957 host.go:66] Checking if "newest-cni-052144" exists ...
	I1025 09:36:39.466613  210957 cli_runner.go:164] Run: docker container inspect newest-cni-052144 --format={{.State.Status}}
	I1025 09:36:39.479623  210957 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:36:39.482511  210957 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:36:39.482532  210957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:36:39.482597  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:39.488583  210957 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:36:39.491528  210957 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 09:36:39.494347  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:36:39.494374  210957 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:36:39.494439  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:39.520516  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:39.541484  210957 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:36:39.541511  210957 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:36:39.541576  210957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-052144
	I1025 09:36:39.551308  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:39.578133  210957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/newest-cni-052144/id_rsa Username:docker}
	I1025 09:36:39.855979  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:36:39.856006  210957 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:36:39.876203  210957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:36:39.894915  210957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:36:39.903983  210957 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:36:39.904069  210957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:36:39.910391  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:36:39.910415  210957 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:36:39.941444  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:36:39.941469  210957 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:36:39.946533  210957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:36:40.008968  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:36:40.009046  210957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:36:40.087612  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:36:40.087684  210957 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:36:40.164783  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:36:40.164861  210957 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:36:40.208822  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:36:40.208924  210957 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:36:40.263093  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:36:40.263157  210957 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:36:40.283423  210957 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:36:40.283494  210957 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:36:40.307151  210957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1025 09:36:37.850138  203993 node_ready.go:57] node "default-k8s-diff-port-666079" has "Ready":"False" status (will retry)
	I1025 09:36:38.351269  203993 node_ready.go:49] node "default-k8s-diff-port-666079" is "Ready"
	I1025 09:36:38.351301  203993 node_ready.go:38] duration metric: took 41.004470382s for node "default-k8s-diff-port-666079" to be "Ready" ...
	I1025 09:36:38.351315  203993 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:36:38.351372  203993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:36:38.369823  203993 api_server.go:72] duration metric: took 42.30575869s to wait for apiserver process to appear ...
	I1025 09:36:38.369846  203993 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:36:38.369865  203993 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1025 09:36:38.384968  203993 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1025 09:36:38.386144  203993 api_server.go:141] control plane version: v1.34.1
	I1025 09:36:38.386170  203993 api_server.go:131] duration metric: took 16.314567ms to wait for apiserver health ...
	I1025 09:36:38.386179  203993 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:36:38.389526  203993 system_pods.go:59] 8 kube-system pods found
	I1025 09:36:38.389561  203993 system_pods.go:61] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:36:38.389569  203993 system_pods.go:61] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running
	I1025 09:36:38.389575  203993 system_pods.go:61] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:36:38.389579  203993 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running
	I1025 09:36:38.389584  203993 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running
	I1025 09:36:38.389589  203993 system_pods.go:61] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running
	I1025 09:36:38.389593  203993 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running
	I1025 09:36:38.389599  203993 system_pods.go:61] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:36:38.389606  203993 system_pods.go:74] duration metric: took 3.420831ms to wait for pod list to return data ...
	I1025 09:36:38.389614  203993 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:36:38.396448  203993 default_sa.go:45] found service account: "default"
	I1025 09:36:38.396470  203993 default_sa.go:55] duration metric: took 6.850589ms for default service account to be created ...
	I1025 09:36:38.396480  203993 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:36:38.417677  203993 system_pods.go:86] 8 kube-system pods found
	I1025 09:36:38.417761  203993 system_pods.go:89] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:36:38.417799  203993 system_pods.go:89] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running
	I1025 09:36:38.417810  203993 system_pods.go:89] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:36:38.417815  203993 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running
	I1025 09:36:38.417820  203993 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running
	I1025 09:36:38.417825  203993 system_pods.go:89] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running
	I1025 09:36:38.417830  203993 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running
	I1025 09:36:38.417838  203993 system_pods.go:89] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:36:38.417889  203993 retry.go:31] will retry after 264.659639ms: missing components: kube-dns
	I1025 09:36:38.706017  203993 system_pods.go:86] 8 kube-system pods found
	I1025 09:36:38.706046  203993 system_pods.go:89] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:36:38.706053  203993 system_pods.go:89] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running
	I1025 09:36:38.706059  203993 system_pods.go:89] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:36:38.706064  203993 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running
	I1025 09:36:38.706068  203993 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running
	I1025 09:36:38.706072  203993 system_pods.go:89] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running
	I1025 09:36:38.706076  203993 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running
	I1025 09:36:38.706083  203993 system_pods.go:89] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:36:38.706097  203993 retry.go:31] will retry after 380.355508ms: missing components: kube-dns
	I1025 09:36:39.091191  203993 system_pods.go:86] 8 kube-system pods found
	I1025 09:36:39.091224  203993 system_pods.go:89] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:36:39.091261  203993 system_pods.go:89] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running
	I1025 09:36:39.091269  203993 system_pods.go:89] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:36:39.091273  203993 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running
	I1025 09:36:39.091278  203993 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running
	I1025 09:36:39.091282  203993 system_pods.go:89] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running
	I1025 09:36:39.091286  203993 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running
	I1025 09:36:39.091291  203993 system_pods.go:89] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:36:39.091306  203993 retry.go:31] will retry after 469.185972ms: missing components: kube-dns
	I1025 09:36:39.591027  203993 system_pods.go:86] 8 kube-system pods found
	I1025 09:36:39.591055  203993 system_pods.go:89] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Running
	I1025 09:36:39.591063  203993 system_pods.go:89] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running
	I1025 09:36:39.591068  203993 system_pods.go:89] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:36:39.591073  203993 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running
	I1025 09:36:39.591078  203993 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running
	I1025 09:36:39.591081  203993 system_pods.go:89] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running
	I1025 09:36:39.591085  203993 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running
	I1025 09:36:39.591089  203993 system_pods.go:89] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Running
	I1025 09:36:39.591097  203993 system_pods.go:126] duration metric: took 1.19461026s to wait for k8s-apps to be running ...
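
The three retries above are a polling loop with growing sleeps: list the kube-system pods, report what is still missing (kube-dns), sleep, repeat until everything is Running. A sketch of the same wait built on apimachinery's wait helpers rather than hand-rolled sleeps; clientset setup is shown minimally, and the k8s-app=kube-dns selector is assumed from the log:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, give up after 2 minutes, check immediately.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err != nil {
				return false, nil // transient API errors: keep retrying
			}
			for _, p := range pods.Items {
				if p.Status.Phase == "Running" {
					return true, nil
				}
			}
			return false, nil
		})
	fmt.Println("kube-dns running:", err == nil)
}
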
	I1025 09:36:39.591105  203993 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:36:39.591160  203993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:36:39.620213  203993 system_svc.go:56] duration metric: took 29.097853ms WaitForService to wait for kubelet
	I1025 09:36:39.620239  203993 kubeadm.go:586] duration metric: took 43.556178257s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:36:39.620256  203993 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:36:39.641042  203993 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:36:39.641128  203993 node_conditions.go:123] node cpu capacity is 2
	I1025 09:36:39.641144  203993 node_conditions.go:105] duration metric: took 20.882447ms to run NodePressure ...
	I1025 09:36:39.641157  203993 start.go:241] waiting for startup goroutines ...
	I1025 09:36:39.641164  203993 start.go:246] waiting for cluster config update ...
	I1025 09:36:39.641175  203993 start.go:255] writing updated cluster config ...
	I1025 09:36:39.641540  203993 ssh_runner.go:195] Run: rm -f paused
	I1025 09:36:39.648092  203993 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:36:39.660718  203993 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dzmkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.667907  203993 pod_ready.go:94] pod "coredns-66bc5c9577-dzmkq" is "Ready"
	I1025 09:36:39.667985  203993 pod_ready.go:86] duration metric: took 7.240108ms for pod "coredns-66bc5c9577-dzmkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.672226  203993 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.682557  203993 pod_ready.go:94] pod "etcd-default-k8s-diff-port-666079" is "Ready"
	I1025 09:36:39.682634  203993 pod_ready.go:86] duration metric: took 10.330991ms for pod "etcd-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.685592  203993 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.691481  203993 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-666079" is "Ready"
	I1025 09:36:39.691566  203993 pod_ready.go:86] duration metric: took 5.901016ms for pod "kube-apiserver-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:39.696198  203993 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:40.053617  203993 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-666079" is "Ready"
	I1025 09:36:40.053645  203993 pod_ready.go:86] duration metric: took 357.371072ms for pod "kube-controller-manager-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:40.252956  203993 pod_ready.go:83] waiting for pod "kube-proxy-65j7p" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:40.653393  203993 pod_ready.go:94] pod "kube-proxy-65j7p" is "Ready"
	I1025 09:36:40.653426  203993 pod_ready.go:86] duration metric: took 400.440546ms for pod "kube-proxy-65j7p" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:40.854040  203993 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:41.253307  203993 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-666079" is "Ready"
	I1025 09:36:41.253335  203993 pod_ready.go:86] duration metric: took 399.265392ms for pod "kube-scheduler-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:36:41.253348  203993 pod_ready.go:40] duration metric: took 1.60522578s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
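
The pod_ready waits above test each pod's Ready condition rather than its phase, which is why a Running pod with an unready container still counts as not Ready. A small sketch of that predicate (the helper name is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReady mirrors the check behind these waits: a pod counts as
// "Ready" only when its Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(podReady(p)) // true
}
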
	I1025 09:36:41.353285  203993 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:36:41.357371  203993 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-666079" cluster and "default" namespace by default
	I1025 09:36:45.821926  210957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.926963852s)
	I1025 09:36:45.822007  210957 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.917916183s)
	I1025 09:36:45.822023  210957 api_server.go:72] duration metric: took 6.411758561s to wait for apiserver process to appear ...
	I1025 09:36:45.822033  210957 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:36:45.822050  210957 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:36:45.822359  210957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.875794516s)
	I1025 09:36:45.850819  210957 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:36:45.850844  210957 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:36:45.889663  210957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.582428505s)
	I1025 09:36:45.892890  210957 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-052144 addons enable metrics-server
	
	I1025 09:36:45.895838  210957 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1025 09:36:45.898778  210957 addons.go:514] duration metric: took 6.48813245s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 09:36:46.322707  210957 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 09:36:46.331370  210957 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 09:36:46.332647  210957 api_server.go:141] control plane version: v1.34.1
	I1025 09:36:46.332676  210957 api_server.go:131] duration metric: took 510.63666ms to wait for apiserver health ...
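
The healthz sequence above is the normal bootstrap shape: /healthz returns 500 while the rbac/bootstrap-roles poststarthook finishes, then flips to 200 "ok" on the next poll. A sketch of such a probe; the apiserver's serving certificate is not yet in the host trust store at this point, so a bootstrap-time probe has to skip TLS verification:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is untrusted during bootstrap; skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			// A 500 with "[-]poststarthook/... failed" is expected briefly.
			fmt.Println("not ready yet:", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
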
	I1025 09:36:46.332686  210957 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:36:46.336167  210957 system_pods.go:59] 8 kube-system pods found
	I1025 09:36:46.336203  210957 system_pods.go:61] "coredns-66bc5c9577-whxdx" [3df2d221-4d0f-4389-b1b1-78c0c980eb77] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 09:36:46.336213  210957 system_pods.go:61] "etcd-newest-cni-052144" [a5f918ce-23e0-463a-a637-4ecad2be6163] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:36:46.336240  210957 system_pods.go:61] "kindnet-c9wzk" [cf7b8b45-3b46-4a97-8c27-2eca0f408738] Running
	I1025 09:36:46.336254  210957 system_pods.go:61] "kube-apiserver-newest-cni-052144" [c0c3a020-1407-4e0e-9378-3a7d5f49fcd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:36:46.336261  210957 system_pods.go:61] "kube-controller-manager-newest-cni-052144" [3933334f-83c3-43fa-a233-f4931bd7224a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:36:46.336266  210957 system_pods.go:61] "kube-proxy-wh72x" [e3f00316-8d1f-4dd3-ad3b-7b973e951dc3] Running
	I1025 09:36:46.336275  210957 system_pods.go:61] "kube-scheduler-newest-cni-052144" [ccba5caf-481b-4b6b-88d0-71e8581766dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:36:46.336281  210957 system_pods.go:61] "storage-provisioner" [c71a8288-c49a-4cf3-a34b-e5b06c1509ac] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 09:36:46.336288  210957 system_pods.go:74] duration metric: took 3.595832ms to wait for pod list to return data ...
	I1025 09:36:46.336316  210957 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:36:46.340794  210957 default_sa.go:45] found service account: "default"
	I1025 09:36:46.340869  210957 default_sa.go:55] duration metric: took 4.535273ms for default service account to be created ...
	I1025 09:36:46.340906  210957 kubeadm.go:586] duration metric: took 6.930630187s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 09:36:46.340959  210957 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:36:46.347812  210957 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:36:46.347897  210957 node_conditions.go:123] node cpu capacity is 2
	I1025 09:36:46.347923  210957 node_conditions.go:105] duration metric: took 6.944408ms to run NodePressure ...
	I1025 09:36:46.347975  210957 start.go:241] waiting for startup goroutines ...
	I1025 09:36:46.348002  210957 start.go:246] waiting for cluster config update ...
	I1025 09:36:46.348029  210957 start.go:255] writing updated cluster config ...
	I1025 09:36:46.348380  210957 ssh_runner.go:195] Run: rm -f paused
	I1025 09:36:46.441580  210957 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:36:46.446488  210957 out.go:179] * Done! kubectl is now configured to use "newest-cni-052144" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 09:36:38 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:38.594302715Z" level=info msg="Created container 210229fd897c2a1c647013851f4fccc0054af78cdae72b82785b9fd6a07bfed2: kube-system/coredns-66bc5c9577-dzmkq/coredns" id=e3c2eb4e-1f1c-4c86-bbca-81d88347300c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:36:38 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:38.596103909Z" level=info msg="Starting container: 210229fd897c2a1c647013851f4fccc0054af78cdae72b82785b9fd6a07bfed2" id=9809ac2c-79f0-4265-ac64-a3a23e95dd83 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:36:38 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:38.60027185Z" level=info msg="Started container" PID=1734 containerID=210229fd897c2a1c647013851f4fccc0054af78cdae72b82785b9fd6a07bfed2 description=kube-system/coredns-66bc5c9577-dzmkq/coredns id=9809ac2c-79f0-4265-ac64-a3a23e95dd83 name=/runtime.v1.RuntimeService/StartContainer sandboxID=56ab15126f4983a8312ae93873d5ed8d7906a751358ae4a2423c9a0c0d14f19b
	Oct 25 09:36:41 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:41.99756604Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9eca74ab-5d04-42bc-be10-2544550a66ce name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:41 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:41.99764865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:42 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:42.009935798Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0e0433e689889e2e2477c5a7e4fdeca4b6e87c84fdcf4edbef297af8a56420ff UID:8bca5827-d45d-434b-b53a-3f6ea93124bb NetNS:/var/run/netns/d1885b9f-a424-4c7b-a56d-ea8d182efadc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d820}] Aliases:map[]}"
	Oct 25 09:36:42 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:42.010035139Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:36:42 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:42.041044004Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0e0433e689889e2e2477c5a7e4fdeca4b6e87c84fdcf4edbef297af8a56420ff UID:8bca5827-d45d-434b-b53a-3f6ea93124bb NetNS:/var/run/netns/d1885b9f-a424-4c7b-a56d-ea8d182efadc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d820}] Aliases:map[]}"
	Oct 25 09:36:42 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:42.041278821Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 09:36:42 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:42.051630152Z" level=info msg="Ran pod sandbox 0e0433e689889e2e2477c5a7e4fdeca4b6e87c84fdcf4edbef297af8a56420ff with infra container: default/busybox/POD" id=9eca74ab-5d04-42bc-be10-2544550a66ce name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:36:42 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:42.052918528Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d456c263-43a9-4899-9bc6-d79a8624205e name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:42 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:42.053205892Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d456c263-43a9-4899-9bc6-d79a8624205e name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:42 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:42.058156373Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d456c263-43a9-4899-9bc6-d79a8624205e name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:42 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:42.059553353Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=810bd200-3685-4383-b52d-711e7b61af6f name=/runtime.v1.ImageService/PullImage
	Oct 25 09:36:42 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:42.064899466Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 09:36:44 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:44.211416058Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=810bd200-3685-4383-b52d-711e7b61af6f name=/runtime.v1.ImageService/PullImage
	Oct 25 09:36:44 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:44.212137978Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1b1d96d5-a251-4ca3-ba23-566d456ee601 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:44 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:44.213972649Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=389ffa75-7be3-4f3b-906d-6d0f318f2e3a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:36:44 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:44.221455336Z" level=info msg="Creating container: default/busybox/busybox" id=2abb984f-806f-4aef-becb-ca0bbf069031 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:36:44 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:44.221585143Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:44.233829419Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:44.234375304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:36:44 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:44.26624881Z" level=info msg="Created container 571869f4434f0facdb885dfb454bd99824208f972dd83f75f70a465e1f644f80: default/busybox/busybox" id=2abb984f-806f-4aef-becb-ca0bbf069031 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:36:44 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:44.26951508Z" level=info msg="Starting container: 571869f4434f0facdb885dfb454bd99824208f972dd83f75f70a465e1f644f80" id=65e30dd6-9248-47b0-b118-8eb96371868b name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:36:44 default-k8s-diff-port-666079 crio[836]: time="2025-10-25T09:36:44.274327098Z" level=info msg="Started container" PID=1789 containerID=571869f4434f0facdb885dfb454bd99824208f972dd83f75f70a465e1f644f80 description=default/busybox/busybox id=65e30dd6-9248-47b0-b118-8eb96371868b name=/runtime.v1.RuntimeService/StartContainer sandboxID=0e0433e689889e2e2477c5a7e4fdeca4b6e87c84fdcf4edbef297af8a56420ff
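
The busybox lines above show the CRI call sequence end to end: ImageStatus ("Checking image status") finds nothing, PullImage fetches the digest-pinned image, then CreateContainer and StartContainer run it. A sketch of the first two calls spoken directly over CRI-O's gRPC socket, assuming crio's default socket path and trimming error handling:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}
	ctx := context.Background()

	// "Checking image status" in the log is an ImageStatus call ...
	st, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: spec})
	if err != nil {
		panic(err)
	}
	if st.Image == nil {
		// ... and "Image ... not found" triggers the PullImage call.
		pulled, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: spec})
		if err != nil {
			panic(err)
		}
		fmt.Println("pulled:", pulled.ImageRef) // digest-pinned reference
	}
}
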
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	571869f4434f0       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   0e0433e689889       busybox                                                default
	210229fd897c2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   56ab15126f498       coredns-66bc5c9577-dzmkq                               kube-system
	bd3ff702e7814       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   2d09254544f6f       storage-provisioner                                    kube-system
	9b15b86eea542       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   4b32ec9191c05       kindnet-28vnv                                          kube-system
	e3bba85d149e6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   28d1cb65a1ae8       kube-proxy-65j7p                                       kube-system
	06d9f811283c8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   f15cab575a58e       etcd-default-k8s-diff-port-666079                      kube-system
	194adf2ca2226       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   5097e78c060fc       kube-scheduler-default-k8s-diff-port-666079            kube-system
	80e6e88d95f5f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   176e25cf4f8b3       kube-apiserver-default-k8s-diff-port-666079            kube-system
	b3078b66ec286       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   8b83c07c8fdba       kube-controller-manager-default-k8s-diff-port-666079   kube-system
	
	
	==> coredns [210229fd897c2a1c647013851f4fccc0054af78cdae72b82785b9fd6a07bfed2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57221 - 51276 "HINFO IN 4786708497132938152.1846735787688517692. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013745454s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-666079
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-666079
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=default-k8s-diff-port-666079
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_35_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:35:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-666079
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:36:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:36:38 +0000   Sat, 25 Oct 2025 09:35:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:36:38 +0000   Sat, 25 Oct 2025 09:35:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:36:38 +0000   Sat, 25 Oct 2025 09:35:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:36:38 +0000   Sat, 25 Oct 2025 09:36:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-666079
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                492daa44-3080-463c-abfd-050b629beadb
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-dzmkq                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-666079                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-28vnv                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-666079             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-666079    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-65j7p                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-666079             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Warning  CgroupV1                 71s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 71s)  kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 71s)  kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 71s)  kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-666079 event: Registered Node default-k8s-diff-port-666079 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-666079 status is now: NodeReady
	
	
	==> dmesg <==
	[ +13.271347] overlayfs: idmapped layers are currently not supported
	[Oct25 09:14] overlayfs: idmapped layers are currently not supported
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	[Oct25 09:32] overlayfs: idmapped layers are currently not supported
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	[ +28.289706] overlayfs: idmapped layers are currently not supported
	[Oct25 09:35] overlayfs: idmapped layers are currently not supported
	[Oct25 09:36] overlayfs: idmapped layers are currently not supported
	[ +24.160248] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [06d9f811283c8de4a149e059053719a02703e0480b7dd98d206735c5ee1e642c] <==
	{"level":"warn","ts":"2025-10-25T09:35:45.181910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.216405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.263070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.284666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.324915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.357251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.385577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.459313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.524846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.590927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.657395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.702952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.744607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.772667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.811236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.839266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.859988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.884065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.899422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.918190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.933615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:45.990631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:46.010752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:46.039453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:35:46.160628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37500","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:36:52 up  1:19,  0 user,  load average: 3.58, 3.76, 3.04
	Linux default-k8s-diff-port-666079 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9b15b86eea5420dc018cec03d35444dba4215e211c5d2aff0577378b82818603] <==
	I1025 09:35:57.616579       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:35:57.616883       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 09:35:57.617000       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:35:57.617013       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:35:57.617027       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:35:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:35:57.817394       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:35:57.817466       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:35:57.817500       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:35:57.818376       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 09:36:27.818344       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:36:27.818465       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 09:36:27.818546       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:36:27.818658       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1025 09:36:29.417773       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:36:29.417814       1 metrics.go:72] Registering metrics
	I1025 09:36:29.417868       1 controller.go:711] "Syncing nftables rules"
	I1025 09:36:37.825370       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:36:37.825423       1 main.go:301] handling current node
	I1025 09:36:47.818129       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:36:47.818249       1 main.go:301] handling current node
	
	
	==> kube-apiserver [80e6e88d95f5fe5653ad6d379d5050d69b9910288b8eb90d2923a27cf3845589] <==
	E1025 09:35:47.792487       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1025 09:35:47.806267       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:35:47.806308       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1025 09:35:47.833590       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:35:47.844089       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 09:35:47.888519       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:35:47.904691       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:35:48.014556       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:35:48.248877       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 09:35:48.269920       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 09:35:48.272786       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:35:49.617105       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:35:49.702416       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:35:49.807549       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 09:35:49.817649       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1025 09:35:49.819561       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:35:49.831748       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:35:50.614286       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:35:50.892792       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:35:50.917909       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 09:35:50.946100       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:35:56.365691       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:35:56.515038       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:35:56.597725       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:35:56.812808       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [b3078b66ec2869526ba6b604f1f2e849d3b22c0f27e8ea4d5c70209cf396e251] <==
	I1025 09:35:55.759814       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:35:55.761134       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:35:55.765380       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:35:55.769910       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-666079" podCIDRs=["10.244.0.0/24"]
	I1025 09:35:55.778138       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:35:55.779618       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:35:55.785037       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:35:55.787252       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:35:55.790066       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:35:55.804989       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:35:55.805126       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:35:55.805203       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-666079"
	I1025 09:35:55.805561       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:35:55.808569       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:35:55.814650       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:35:55.814872       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:35:55.818750       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:35:55.819567       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:35:55.819651       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:35:55.820384       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:35:55.921046       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:35:55.952004       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:35:55.952109       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:35:55.952140       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:36:41.038443       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e3bba85d149e6e15a62345aa42fedd6a6fd57bfcdd2b82a58b063a532832da01] <==
	I1025 09:35:57.562977       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:35:57.651720       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:35:57.752776       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:35:57.752889       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 09:35:57.752987       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:35:57.771640       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:35:57.771706       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:35:57.775313       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:35:57.775638       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:35:57.775663       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:35:57.777223       1 config.go:200] "Starting service config controller"
	I1025 09:35:57.777244       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:35:57.777262       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:35:57.777266       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:35:57.777276       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:35:57.777281       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:35:57.777893       1 config.go:309] "Starting node config controller"
	I1025 09:35:57.777909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:35:57.777916       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:35:57.877967       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:35:57.878037       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:35:57.878097       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [194adf2ca2226ea0d6d3c229a70813f1c009dc84495cef97046fa10ed117ff6c] <==
	I1025 09:35:48.609637       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:35:48.612176       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:35:48.612304       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:35:48.615678       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:35:48.615831       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 09:35:48.625798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 09:35:48.653096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:35:48.653284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:35:48.653408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:35:48.653507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:35:48.653625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:35:48.663734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:35:48.663845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:35:48.663948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:35:48.664016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:35:48.664062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:35:48.664112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:35:48.664160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:35:48.664166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:35:48.664209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:35:48.664249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:35:48.664287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:35:48.664327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:35:48.664410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1025 09:35:49.812926       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:35:55 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:35:55.816668    1307 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 09:35:55 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:35:55.817842    1307 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 09:35:57 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:35:57.101230    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9d046e5-ee7b-43a7-b854-2597df0f1432-xtables-lock\") pod \"kube-proxy-65j7p\" (UID: \"e9d046e5-ee7b-43a7-b854-2597df0f1432\") " pod="kube-system/kube-proxy-65j7p"
	Oct 25 09:35:57 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:35:57.101276    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7efe42d1-6ccc-4898-8927-11f06d512ee1-xtables-lock\") pod \"kindnet-28vnv\" (UID: \"7efe42d1-6ccc-4898-8927-11f06d512ee1\") " pod="kube-system/kindnet-28vnv"
	Oct 25 09:35:57 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:35:57.101297    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdcxr\" (UniqueName: \"kubernetes.io/projected/7efe42d1-6ccc-4898-8927-11f06d512ee1-kube-api-access-vdcxr\") pod \"kindnet-28vnv\" (UID: \"7efe42d1-6ccc-4898-8927-11f06d512ee1\") " pod="kube-system/kindnet-28vnv"
	Oct 25 09:35:57 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:35:57.101319    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9d046e5-ee7b-43a7-b854-2597df0f1432-lib-modules\") pod \"kube-proxy-65j7p\" (UID: \"e9d046e5-ee7b-43a7-b854-2597df0f1432\") " pod="kube-system/kube-proxy-65j7p"
	Oct 25 09:35:57 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:35:57.101338    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7efe42d1-6ccc-4898-8927-11f06d512ee1-cni-cfg\") pod \"kindnet-28vnv\" (UID: \"7efe42d1-6ccc-4898-8927-11f06d512ee1\") " pod="kube-system/kindnet-28vnv"
	Oct 25 09:35:57 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:35:57.101353    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7efe42d1-6ccc-4898-8927-11f06d512ee1-lib-modules\") pod \"kindnet-28vnv\" (UID: \"7efe42d1-6ccc-4898-8927-11f06d512ee1\") " pod="kube-system/kindnet-28vnv"
	Oct 25 09:35:57 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:35:57.101369    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e9d046e5-ee7b-43a7-b854-2597df0f1432-kube-proxy\") pod \"kube-proxy-65j7p\" (UID: \"e9d046e5-ee7b-43a7-b854-2597df0f1432\") " pod="kube-system/kube-proxy-65j7p"
	Oct 25 09:35:57 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:35:57.101385    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs6zp\" (UniqueName: \"kubernetes.io/projected/e9d046e5-ee7b-43a7-b854-2597df0f1432-kube-api-access-fs6zp\") pod \"kube-proxy-65j7p\" (UID: \"e9d046e5-ee7b-43a7-b854-2597df0f1432\") " pod="kube-system/kube-proxy-65j7p"
	Oct 25 09:35:57 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:35:57.266539    1307 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 09:35:57 default-k8s-diff-port-666079 kubelet[1307]: W1025 09:35:57.431108    1307 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/crio-4b32ec9191c05f5b80627beeacbd4078259a77f312d15c707a095117d22fe7eb WatchSource:0}: Error finding container 4b32ec9191c05f5b80627beeacbd4078259a77f312d15c707a095117d22fe7eb: Status 404 returned error can't find the container with id 4b32ec9191c05f5b80627beeacbd4078259a77f312d15c707a095117d22fe7eb
	Oct 25 09:35:58 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:35:58.268188    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-28vnv" podStartSLOduration=2.268168493 podStartE2EDuration="2.268168493s" podCreationTimestamp="2025-10-25 09:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:35:58.254472701 +0000 UTC m=+7.465304580" watchObservedRunningTime="2025-10-25 09:35:58.268168493 +0000 UTC m=+7.479000364"
	Oct 25 09:35:58 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:35:58.288725    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-65j7p" podStartSLOduration=2.288705252 podStartE2EDuration="2.288705252s" podCreationTimestamp="2025-10-25 09:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:35:58.269868936 +0000 UTC m=+7.480700823" watchObservedRunningTime="2025-10-25 09:35:58.288705252 +0000 UTC m=+7.499537131"
	Oct 25 09:36:38 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:36:38.037205    1307 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 09:36:38 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:36:38.108939    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkqbl\" (UniqueName: \"kubernetes.io/projected/ded1f77d-7f3d-48f0-94ef-13367b475def-kube-api-access-jkqbl\") pod \"storage-provisioner\" (UID: \"ded1f77d-7f3d-48f0-94ef-13367b475def\") " pod="kube-system/storage-provisioner"
	Oct 25 09:36:38 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:36:38.109148    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/991a35a6-4303-41e2-b7f8-3d267c5fc2ec-config-volume\") pod \"coredns-66bc5c9577-dzmkq\" (UID: \"991a35a6-4303-41e2-b7f8-3d267c5fc2ec\") " pod="kube-system/coredns-66bc5c9577-dzmkq"
	Oct 25 09:36:38 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:36:38.109233    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ded1f77d-7f3d-48f0-94ef-13367b475def-tmp\") pod \"storage-provisioner\" (UID: \"ded1f77d-7f3d-48f0-94ef-13367b475def\") " pod="kube-system/storage-provisioner"
	Oct 25 09:36:38 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:36:38.109310    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mptk2\" (UniqueName: \"kubernetes.io/projected/991a35a6-4303-41e2-b7f8-3d267c5fc2ec-kube-api-access-mptk2\") pod \"coredns-66bc5c9577-dzmkq\" (UID: \"991a35a6-4303-41e2-b7f8-3d267c5fc2ec\") " pod="kube-system/coredns-66bc5c9577-dzmkq"
	Oct 25 09:36:38 default-k8s-diff-port-666079 kubelet[1307]: W1025 09:36:38.497004    1307 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/crio-56ab15126f4983a8312ae93873d5ed8d7906a751358ae4a2423c9a0c0d14f19b WatchSource:0}: Error finding container 56ab15126f4983a8312ae93873d5ed8d7906a751358ae4a2423c9a0c0d14f19b: Status 404 returned error can't find the container with id 56ab15126f4983a8312ae93873d5ed8d7906a751358ae4a2423c9a0c0d14f19b
	Oct 25 09:36:39 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:36:39.510156    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dzmkq" podStartSLOduration=43.510137654 podStartE2EDuration="43.510137654s" podCreationTimestamp="2025-10-25 09:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:36:39.433711289 +0000 UTC m=+48.644543160" watchObservedRunningTime="2025-10-25 09:36:39.510137654 +0000 UTC m=+48.720969525"
	Oct 25 09:36:39 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:36:39.554160    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.554129906 podStartE2EDuration="42.554129906s" podCreationTimestamp="2025-10-25 09:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 09:36:39.510522249 +0000 UTC m=+48.721354120" watchObservedRunningTime="2025-10-25 09:36:39.554129906 +0000 UTC m=+48.764961785"
	Oct 25 09:36:41 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:36:41.839807    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rplh8\" (UniqueName: \"kubernetes.io/projected/8bca5827-d45d-434b-b53a-3f6ea93124bb-kube-api-access-rplh8\") pod \"busybox\" (UID: \"8bca5827-d45d-434b-b53a-3f6ea93124bb\") " pod="default/busybox"
	Oct 25 09:36:42 default-k8s-diff-port-666079 kubelet[1307]: W1025 09:36:42.050089    1307 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/crio-0e0433e689889e2e2477c5a7e4fdeca4b6e87c84fdcf4edbef297af8a56420ff WatchSource:0}: Error finding container 0e0433e689889e2e2477c5a7e4fdeca4b6e87c84fdcf4edbef297af8a56420ff: Status 404 returned error can't find the container with id 0e0433e689889e2e2477c5a7e4fdeca4b6e87c84fdcf4edbef297af8a56420ff
	Oct 25 09:36:44 default-k8s-diff-port-666079 kubelet[1307]: I1025 09:36:44.436102    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.281532911 podStartE2EDuration="3.436081595s" podCreationTimestamp="2025-10-25 09:36:41 +0000 UTC" firstStartedPulling="2025-10-25 09:36:42.058653634 +0000 UTC m=+51.269485505" lastFinishedPulling="2025-10-25 09:36:44.21320231 +0000 UTC m=+53.424034189" observedRunningTime="2025-10-25 09:36:44.435207903 +0000 UTC m=+53.646039774" watchObservedRunningTime="2025-10-25 09:36:44.436081595 +0000 UTC m=+53.646913466"
	
	
	==> storage-provisioner [bd3ff702e78145b334df94c3477f1b72cc77eb89ee1500ceb6da36495daf141a] <==
	I1025 09:36:38.573761       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:36:38.682940       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:36:38.682991       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:36:38.692619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:38.706684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:36:38.706838       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:36:38.707006       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-666079_3e4b6c6f-e91c-4358-8b47-2f660e5d9bfa!
	I1025 09:36:38.707287       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a28afa3c-22cb-41cd-9bf1-a7e2b455d9f3", APIVersion:"v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-666079_3e4b6c6f-e91c-4358-8b47-2f660e5d9bfa became leader
	W1025 09:36:38.718454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:38.724195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:36:38.811118       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-666079_3e4b6c6f-e91c-4358-8b47-2f660e5d9bfa!
	W1025 09:36:40.733659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:40.738197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:42.741553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:42.765581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:44.769387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:44.779772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:46.783736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:46.793701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:48.798881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:48.805710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:50.809627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:50.824248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-666079 -n default-k8s-diff-port-666079
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-666079 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.23s)
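To re-run this post-mortem by hand, the two checks above can be issued directly. A minimal sketch, assuming the CI binary path out/minikube-linux-arm64 and the profile name default-k8s-diff-port-666079 taken from the logs above (substitute your local minikube binary and profile as needed):

	# API server status for the profile (mirrors helpers_test.go:262)
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-666079 -n default-k8s-diff-port-666079
	# any pods not in the Running phase (mirrors helpers_test.go:269)
	kubectl --context default-k8s-diff-port-666079 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running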

TestStartStop/group/default-k8s-diff-port/serial/Pause (6.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-666079 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-666079 --alsologtostderr -v=1: exit status 80 (2.509536741s)

-- stdout --
	* Pausing node default-k8s-diff-port-666079 ... 
	
	

-- /stdout --
** stderr ** 
	I1025 09:38:12.314902  219552 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:38:12.315052  219552 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:38:12.315063  219552 out.go:374] Setting ErrFile to fd 2...
	I1025 09:38:12.315069  219552 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:38:12.315346  219552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:38:12.315617  219552 out.go:368] Setting JSON to false
	I1025 09:38:12.315643  219552 mustload.go:65] Loading cluster: default-k8s-diff-port-666079
	I1025 09:38:12.316013  219552 config.go:182] Loaded profile config "default-k8s-diff-port-666079": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:38:12.316565  219552 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:38:12.337232  219552 host.go:66] Checking if "default-k8s-diff-port-666079" exists ...
	I1025 09:38:12.337621  219552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:38:12.401820  219552 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 09:38:12.390569435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:38:12.402714  219552 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-666079 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 09:38:12.409432  219552 out.go:179] * Pausing node default-k8s-diff-port-666079 ... 
	I1025 09:38:12.412941  219552 host.go:66] Checking if "default-k8s-diff-port-666079" exists ...
	I1025 09:38:12.413439  219552 ssh_runner.go:195] Run: systemctl --version
	I1025 09:38:12.413499  219552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:38:12.432262  219552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:38:12.536524  219552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:38:12.549561  219552 pause.go:52] kubelet running: true
	I1025 09:38:12.549654  219552 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:38:12.801266  219552 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:38:12.801351  219552 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:38:12.875801  219552 cri.go:89] found id: "fcd374f1688b8ed4e7571ec994e121446bd298b6684eee33d3b3ab788c09fd2a"
	I1025 09:38:12.875824  219552 cri.go:89] found id: "6c54fce55676c84d4384dd7ac96ecf2530d5a363686e91690dc3545792bcc0b6"
	I1025 09:38:12.875830  219552 cri.go:89] found id: "d2e32fa53d02a52e39f9a3c61406c3eba615d2628a65822c3a98cee9707208b7"
	I1025 09:38:12.875834  219552 cri.go:89] found id: "e166325c1923d08d8647a1a3c29bf323317468e389b1f4993ca6afafc167012d"
	I1025 09:38:12.875837  219552 cri.go:89] found id: "bcf200eeb50f5e2d26ad7b92d990c6b3d8d58108b4336e8005c6dfaaaa9cbc6b"
	I1025 09:38:12.875841  219552 cri.go:89] found id: "c26bf38fd7e4b9f51947f954e4ee102888ffd02a198adb203972580c4eb3c74d"
	I1025 09:38:12.875844  219552 cri.go:89] found id: "93c1d103bf05eb8996db42684ab453c3e8a59e4287467d1fb344225e54155651"
	I1025 09:38:12.875847  219552 cri.go:89] found id: "fe95bac5f1e76131716e125587dd727d7db7bdabeed57b1078cc75158bc0da09"
	I1025 09:38:12.875850  219552 cri.go:89] found id: "36dbd5d0fba8fd463698b1cfb95820a97032c9e08ee3218bc4e23d5db821fa62"
	I1025 09:38:12.875862  219552 cri.go:89] found id: "7ba993183832edaaf183af2a7b8cff0dbc3f87072503fc42186cda8f2ee1e23c"
	I1025 09:38:12.875871  219552 cri.go:89] found id: "8fbb9eadefbd80b899692ec9dd8c86fba760ca25136cdb11e58fcf1c5b382d3f"
	I1025 09:38:12.875874  219552 cri.go:89] found id: ""
	I1025 09:38:12.875921  219552 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:38:12.887074  219552 retry.go:31] will retry after 153.40321ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:38:12Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:38:13.041524  219552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:38:13.054686  219552 pause.go:52] kubelet running: false
	I1025 09:38:13.054799  219552 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:38:13.218301  219552 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:38:13.218379  219552 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:38:13.289924  219552 cri.go:89] found id: "fcd374f1688b8ed4e7571ec994e121446bd298b6684eee33d3b3ab788c09fd2a"
	I1025 09:38:13.289946  219552 cri.go:89] found id: "6c54fce55676c84d4384dd7ac96ecf2530d5a363686e91690dc3545792bcc0b6"
	I1025 09:38:13.289950  219552 cri.go:89] found id: "d2e32fa53d02a52e39f9a3c61406c3eba615d2628a65822c3a98cee9707208b7"
	I1025 09:38:13.289953  219552 cri.go:89] found id: "e166325c1923d08d8647a1a3c29bf323317468e389b1f4993ca6afafc167012d"
	I1025 09:38:13.289956  219552 cri.go:89] found id: "bcf200eeb50f5e2d26ad7b92d990c6b3d8d58108b4336e8005c6dfaaaa9cbc6b"
	I1025 09:38:13.289960  219552 cri.go:89] found id: "c26bf38fd7e4b9f51947f954e4ee102888ffd02a198adb203972580c4eb3c74d"
	I1025 09:38:13.289963  219552 cri.go:89] found id: "93c1d103bf05eb8996db42684ab453c3e8a59e4287467d1fb344225e54155651"
	I1025 09:38:13.289966  219552 cri.go:89] found id: "fe95bac5f1e76131716e125587dd727d7db7bdabeed57b1078cc75158bc0da09"
	I1025 09:38:13.289968  219552 cri.go:89] found id: "36dbd5d0fba8fd463698b1cfb95820a97032c9e08ee3218bc4e23d5db821fa62"
	I1025 09:38:13.289975  219552 cri.go:89] found id: "7ba993183832edaaf183af2a7b8cff0dbc3f87072503fc42186cda8f2ee1e23c"
	I1025 09:38:13.289978  219552 cri.go:89] found id: "8fbb9eadefbd80b899692ec9dd8c86fba760ca25136cdb11e58fcf1c5b382d3f"
	I1025 09:38:13.289996  219552 cri.go:89] found id: ""
	I1025 09:38:13.290050  219552 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:38:13.301592  219552 retry.go:31] will retry after 351.971575ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:38:13Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:38:13.654109  219552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:38:13.667377  219552 pause.go:52] kubelet running: false
	I1025 09:38:13.667456  219552 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:38:13.837776  219552 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:38:13.837895  219552 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:38:13.913609  219552 cri.go:89] found id: "fcd374f1688b8ed4e7571ec994e121446bd298b6684eee33d3b3ab788c09fd2a"
	I1025 09:38:13.913707  219552 cri.go:89] found id: "6c54fce55676c84d4384dd7ac96ecf2530d5a363686e91690dc3545792bcc0b6"
	I1025 09:38:13.913728  219552 cri.go:89] found id: "d2e32fa53d02a52e39f9a3c61406c3eba615d2628a65822c3a98cee9707208b7"
	I1025 09:38:13.913759  219552 cri.go:89] found id: "e166325c1923d08d8647a1a3c29bf323317468e389b1f4993ca6afafc167012d"
	I1025 09:38:13.913783  219552 cri.go:89] found id: "bcf200eeb50f5e2d26ad7b92d990c6b3d8d58108b4336e8005c6dfaaaa9cbc6b"
	I1025 09:38:13.913804  219552 cri.go:89] found id: "c26bf38fd7e4b9f51947f954e4ee102888ffd02a198adb203972580c4eb3c74d"
	I1025 09:38:13.913828  219552 cri.go:89] found id: "93c1d103bf05eb8996db42684ab453c3e8a59e4287467d1fb344225e54155651"
	I1025 09:38:13.913863  219552 cri.go:89] found id: "fe95bac5f1e76131716e125587dd727d7db7bdabeed57b1078cc75158bc0da09"
	I1025 09:38:13.913889  219552 cri.go:89] found id: "36dbd5d0fba8fd463698b1cfb95820a97032c9e08ee3218bc4e23d5db821fa62"
	I1025 09:38:13.913916  219552 cri.go:89] found id: "7ba993183832edaaf183af2a7b8cff0dbc3f87072503fc42186cda8f2ee1e23c"
	I1025 09:38:13.913940  219552 cri.go:89] found id: "8fbb9eadefbd80b899692ec9dd8c86fba760ca25136cdb11e58fcf1c5b382d3f"
	I1025 09:38:13.913973  219552 cri.go:89] found id: ""
	I1025 09:38:13.914168  219552 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:38:13.926227  219552 retry.go:31] will retry after 547.694805ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:38:13Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:38:14.475048  219552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:38:14.488345  219552 pause.go:52] kubelet running: false
	I1025 09:38:14.488451  219552 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 09:38:14.660731  219552 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 09:38:14.660852  219552 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 09:38:14.739240  219552 cri.go:89] found id: "fcd374f1688b8ed4e7571ec994e121446bd298b6684eee33d3b3ab788c09fd2a"
	I1025 09:38:14.739278  219552 cri.go:89] found id: "6c54fce55676c84d4384dd7ac96ecf2530d5a363686e91690dc3545792bcc0b6"
	I1025 09:38:14.739284  219552 cri.go:89] found id: "d2e32fa53d02a52e39f9a3c61406c3eba615d2628a65822c3a98cee9707208b7"
	I1025 09:38:14.739288  219552 cri.go:89] found id: "e166325c1923d08d8647a1a3c29bf323317468e389b1f4993ca6afafc167012d"
	I1025 09:38:14.739310  219552 cri.go:89] found id: "bcf200eeb50f5e2d26ad7b92d990c6b3d8d58108b4336e8005c6dfaaaa9cbc6b"
	I1025 09:38:14.739322  219552 cri.go:89] found id: "c26bf38fd7e4b9f51947f954e4ee102888ffd02a198adb203972580c4eb3c74d"
	I1025 09:38:14.739326  219552 cri.go:89] found id: "93c1d103bf05eb8996db42684ab453c3e8a59e4287467d1fb344225e54155651"
	I1025 09:38:14.739329  219552 cri.go:89] found id: "fe95bac5f1e76131716e125587dd727d7db7bdabeed57b1078cc75158bc0da09"
	I1025 09:38:14.739332  219552 cri.go:89] found id: "36dbd5d0fba8fd463698b1cfb95820a97032c9e08ee3218bc4e23d5db821fa62"
	I1025 09:38:14.739366  219552 cri.go:89] found id: "7ba993183832edaaf183af2a7b8cff0dbc3f87072503fc42186cda8f2ee1e23c"
	I1025 09:38:14.739377  219552 cri.go:89] found id: "8fbb9eadefbd80b899692ec9dd8c86fba760ca25136cdb11e58fcf1c5b382d3f"
	I1025 09:38:14.739381  219552 cri.go:89] found id: ""
	I1025 09:38:14.739456  219552 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:38:14.754197  219552 out.go:203] 
	W1025 09:38:14.757090  219552 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:38:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:38:14.757116  219552 out.go:285] * 
	W1025 09:38:14.762184  219552 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:38:14.765258  219552 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-666079 --alsologtostderr -v=1 failed: exit status 80
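The failure mode above recurs across every Pause test in this report: `sudo runc list -f json` exits 1 with `open /run/runc: no such file or directory`, minikube retries with a growing delay (351ms, then 547ms), and finally surfaces GUEST_PAUSE / exit status 80. The sketch below only illustrates that retry-with-growing-backoff shape; it is not minikube's actual retry.go implementation, and the ~1.5x growth factor is inferred from the two delays logged above.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunc mirrors the failing call from the log: `sudo runc list -f json`.
func listRunc() error {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return fmt.Errorf("runc list: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Retry with a growing delay, roughly matching the 351ms -> 547ms
	// progression logged by retry.go before the command gives up.
	delay := 350 * time.Millisecond
	var err error
	for attempt := 1; attempt <= 3; attempt++ {
		if err = listRunc(); err == nil {
			fmt.Println("runc state readable")
			return
		}
		fmt.Printf("attempt %d: will retry after %v: %v\n", attempt, delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2
	}
	fmt.Println("giving up:", err) // minikube maps this to GUEST_PAUSE (exit 80)
}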
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-666079
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-666079:

-- stdout --
	[
	    {
	        "Id": "957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862",
	        "Created": "2025-10-25T09:35:22.279167682Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 216468,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:37:05.932876673Z",
	            "FinishedAt": "2025-10-25T09:37:04.909610866Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/hostname",
	        "HostsPath": "/var/lib/docker/containers/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/hosts",
	        "LogPath": "/var/lib/docker/containers/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862-json.log",
	        "Name": "/default-k8s-diff-port-666079",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-666079:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-666079",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862",
	                "LowerDir": "/var/lib/docker/overlay2/5a2d5db135a98df69094a4c9e2b07f83f6518ae35a00f1aa521570af97c1888e-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a2d5db135a98df69094a4c9e2b07f83f6518ae35a00f1aa521570af97c1888e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a2d5db135a98df69094a4c9e2b07f83f6518ae35a00f1aa521570af97c1888e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a2d5db135a98df69094a4c9e2b07f83f6518ae35a00f1aa521570af97c1888e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-666079",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-666079/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-666079",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-666079",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-666079",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c5ec8df35b5632d691aa0030a2690f1f0c45149472d5069b8fc7096388cff0f6",
	            "SandboxKey": "/var/run/docker/netns/c5ec8df35b56",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-666079": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:8e:44:79:f7:6e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fca20c11b6d784ec6e97d5309475016004c54db4ff0e1ebce1147f0efda81f09",
	                    "EndpointID": "6098d684082efe9a456343f860d07b14b985e08abf67e62215a74dcd6b756080",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-666079",
	                        "957d2a4135a8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
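One field in the inspect output above accounts for the missing runc state: `HostConfig.Tmpfs` mounts both `/run` and `/tmp` as tmpfs, and the container was stopped and restarted immediately before the pause attempt (`FinishedAt` 09:37:04, `StartedAt` 09:37:05), so `/run/runc` is empty until runc recreates it for a new container. A minimal sketch to confirm the tmpfs mounts, assuming a local docker CLI and this profile's container name:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Print the kic container's tmpfs mounts; /run being a tmpfs means
	// runc's state directory (/run/runc) is wiped on every restart.
	out, err := exec.Command("docker", "inspect",
		"--format", "{{json .HostConfig.Tmpfs}}",
		"default-k8s-diff-port-666079").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Printf("%s\n", out) // expected: {"/run":"","/tmp":""}
}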
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-666079 -n default-k8s-diff-port-666079
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-666079 -n default-k8s-diff-port-666079: exit status 2 (382.141753ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-666079 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-666079 logs -n 25: (1.298764149s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p no-preload-179869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ delete  │ -p no-preload-179869                                                                                                                                                                                                                          │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p no-preload-179869                                                                                                                                                                                                                          │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p disable-driver-mounts-901717                                                                                                                                                                                                               │ disable-driver-mounts-901717 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-666079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:36 UTC │
	│ image   │ embed-certs-173264 image list --format=json                                                                                                                                                                                                   │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ pause   │ -p embed-certs-173264 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ delete  │ -p embed-certs-173264                                                                                                                                                                                                                         │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p embed-certs-173264                                                                                                                                                                                                                         │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ start   │ -p newest-cni-052144 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-052144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ stop    │ -p newest-cni-052144 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-052144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ start   │ -p newest-cni-052144 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ image   │ newest-cni-052144 image list --format=json                                                                                                                                                                                                    │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ pause   │ -p newest-cni-052144 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-666079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-666079 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:37 UTC │
	│ delete  │ -p newest-cni-052144                                                                                                                                                                                                                          │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ delete  │ -p newest-cni-052144                                                                                                                                                                                                                          │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ start   │ -p auto-068349 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-068349                  │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-666079 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:37 UTC │ 25 Oct 25 09:37 UTC │
	│ start   │ -p default-k8s-diff-port-666079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:37 UTC │ 25 Oct 25 09:38 UTC │
	│ image   │ default-k8s-diff-port-666079 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:38 UTC │ 25 Oct 25 09:38 UTC │
	│ pause   │ -p default-k8s-diff-port-666079 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:37:05
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:37:05.556347  216293 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:37:05.556571  216293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:37:05.556599  216293 out.go:374] Setting ErrFile to fd 2...
	I1025 09:37:05.556619  216293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:37:05.556917  216293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:37:05.557404  216293 out.go:368] Setting JSON to false
	I1025 09:37:05.558398  216293 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4777,"bootTime":1761380249,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:37:05.558493  216293 start.go:141] virtualization:  
	I1025 09:37:05.562025  216293 out.go:179] * [default-k8s-diff-port-666079] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:37:05.564947  216293 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:37:05.565046  216293 notify.go:220] Checking for updates...
	I1025 09:37:05.570836  216293 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:37:05.573862  216293 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:37:05.576884  216293 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:37:05.580271  216293 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:37:05.583177  216293 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:37:05.586655  216293 config.go:182] Loaded profile config "default-k8s-diff-port-666079": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:37:05.587333  216293 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:37:05.631866  216293 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:37:05.631983  216293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:37:05.735249  216293 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 09:37:05.725368962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:37:05.735356  216293 docker.go:318] overlay module found
	I1025 09:37:05.738628  216293 out.go:179] * Using the docker driver based on existing profile
	I1025 09:37:05.741466  216293 start.go:305] selected driver: docker
	I1025 09:37:05.741486  216293 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-666079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:37:05.741593  216293 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:37:05.742559  216293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:37:05.834799  216293 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 09:37:05.81995224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:37:05.835128  216293 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:37:05.835160  216293 cni.go:84] Creating CNI manager for ""
	I1025 09:37:05.835217  216293 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:37:05.835255  216293 start.go:349] cluster config:
	{Name:default-k8s-diff-port-666079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:37:05.838712  216293 out.go:179] * Starting "default-k8s-diff-port-666079" primary control-plane node in "default-k8s-diff-port-666079" cluster
	I1025 09:37:05.841667  216293 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:37:05.844747  216293 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:37:05.847618  216293 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:37:05.847676  216293 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:37:05.847688  216293 cache.go:58] Caching tarball of preloaded images
	I1025 09:37:05.847774  216293 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:37:05.847788  216293 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:37:05.847909  216293 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/config.json ...
	I1025 09:37:05.848137  216293 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:37:05.874875  216293 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:37:05.874895  216293 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:37:05.874908  216293 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:37:05.874930  216293 start.go:360] acquireMachinesLock for default-k8s-diff-port-666079: {Name:mk25f9f0a43388f7cdd9c3ecfcc6756ef82b00a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:37:05.874993  216293 start.go:364] duration metric: took 35.808µs to acquireMachinesLock for "default-k8s-diff-port-666079"
	I1025 09:37:05.875019  216293 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:37:05.875026  216293 fix.go:54] fixHost starting: 
	I1025 09:37:05.875295  216293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:37:05.897812  216293 fix.go:112] recreateIfNeeded on default-k8s-diff-port-666079: state=Stopped err=<nil>
	W1025 09:37:05.897842  216293 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 09:37:01.998401  214786 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-068349:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.436037396s)
	I1025 09:37:01.998435  214786 kic.go:203] duration metric: took 4.436168465s to extract preloaded images to volume ...
	W1025 09:37:01.998576  214786 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 09:37:01.998689  214786 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:37:02.055141  214786 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-068349 --name auto-068349 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-068349 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-068349 --network auto-068349 --ip 192.168.85.2 --volume auto-068349:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:37:02.366015  214786 cli_runner.go:164] Run: docker container inspect auto-068349 --format={{.State.Running}}
	I1025 09:37:02.386935  214786 cli_runner.go:164] Run: docker container inspect auto-068349 --format={{.State.Status}}
	I1025 09:37:02.406857  214786 cli_runner.go:164] Run: docker exec auto-068349 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:37:02.459584  214786 oci.go:144] the created container "auto-068349" has a running status.
	I1025 09:37:02.459620  214786 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa...
	I1025 09:37:03.270702  214786 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:37:03.289898  214786 cli_runner.go:164] Run: docker container inspect auto-068349 --format={{.State.Status}}
	I1025 09:37:03.308090  214786 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:37:03.308108  214786 kic_runner.go:114] Args: [docker exec --privileged auto-068349 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:37:03.351199  214786 cli_runner.go:164] Run: docker container inspect auto-068349 --format={{.State.Status}}
	I1025 09:37:03.368959  214786 machine.go:93] provisionDockerMachine start ...
	I1025 09:37:03.369059  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:03.387604  214786 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:03.387933  214786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1025 09:37:03.387948  214786 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:37:03.550506  214786 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-068349
	
	I1025 09:37:03.550531  214786 ubuntu.go:182] provisioning hostname "auto-068349"
	I1025 09:37:03.550643  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:03.572248  214786 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:03.572554  214786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1025 09:37:03.572579  214786 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-068349 && echo "auto-068349" | sudo tee /etc/hostname
	I1025 09:37:03.731544  214786 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-068349
	
	I1025 09:37:03.731680  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:03.748737  214786 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:03.749077  214786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1025 09:37:03.749102  214786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-068349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-068349/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-068349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:37:03.898133  214786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:37:03.898158  214786 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:37:03.898178  214786 ubuntu.go:190] setting up certificates
	I1025 09:37:03.898187  214786 provision.go:84] configureAuth start
	I1025 09:37:03.898246  214786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-068349
	I1025 09:37:03.917918  214786 provision.go:143] copyHostCerts
	I1025 09:37:03.918135  214786 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:37:03.918152  214786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:37:03.918230  214786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:37:03.918357  214786 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:37:03.918369  214786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:37:03.918400  214786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:37:03.918457  214786 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:37:03.918467  214786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:37:03.918493  214786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 09:37:03.918543  214786 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.auto-068349 san=[127.0.0.1 192.168.85.2 auto-068349 localhost minikube]
	I1025 09:37:04.337492  214786 provision.go:177] copyRemoteCerts
	I1025 09:37:04.337563  214786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:37:04.337610  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:04.354350  214786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa Username:docker}
	I1025 09:37:04.457776  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:37:04.474991  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1025 09:37:04.492594  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:37:04.509630  214786 provision.go:87] duration metric: took 611.419706ms to configureAuth
	I1025 09:37:04.509656  214786 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:37:04.509844  214786 config.go:182] Loaded profile config "auto-068349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:37:04.509951  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:04.527260  214786 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:04.527568  214786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1025 09:37:04.527588  214786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:37:04.789197  214786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:37:04.789220  214786 machine.go:96] duration metric: took 1.420239246s to provisionDockerMachine
	I1025 09:37:04.789231  214786 client.go:171] duration metric: took 7.864205677s to LocalClient.Create
	I1025 09:37:04.789249  214786 start.go:167] duration metric: took 7.864276652s to libmachine.API.Create "auto-068349"
	I1025 09:37:04.789256  214786 start.go:293] postStartSetup for "auto-068349" (driver="docker")
	I1025 09:37:04.789266  214786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:37:04.789328  214786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:37:04.789378  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:04.807853  214786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa Username:docker}
	I1025 09:37:04.918293  214786 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:37:04.924273  214786 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:37:04.924356  214786 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:37:04.924382  214786 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:37:04.924467  214786 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:37:04.924587  214786 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:37:04.924737  214786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:37:04.935394  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:37:04.959489  214786 start.go:296] duration metric: took 170.219238ms for postStartSetup
	I1025 09:37:04.959841  214786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-068349
	I1025 09:37:04.983380  214786 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/config.json ...
	I1025 09:37:04.983790  214786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:37:04.983879  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:05.006808  214786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa Username:docker}
	I1025 09:37:05.119036  214786 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:37:05.125621  214786 start.go:128] duration metric: took 8.204166061s to createHost
	I1025 09:37:05.125645  214786 start.go:83] releasing machines lock for "auto-068349", held for 8.204297623s
	I1025 09:37:05.125717  214786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-068349
	I1025 09:37:05.149090  214786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:37:05.149090  214786 ssh_runner.go:195] Run: cat /version.json
	I1025 09:37:05.149169  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:05.149191  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:05.181465  214786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa Username:docker}
	I1025 09:37:05.186129  214786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa Username:docker}
	I1025 09:37:05.322224  214786 ssh_runner.go:195] Run: systemctl --version
	I1025 09:37:05.424747  214786 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:37:05.477612  214786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:37:05.482502  214786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:37:05.482569  214786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:37:05.524275  214786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
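
A side note on the step above: disabling the preinstalled bridge/podman CNI configs is just a rename, so CRI-O stops loading them while the files stay recoverable. A minimal Go sketch of the same rename, assuming the paths and the .mk_disabled suffix shown in the log (illustrative only, not minikube's implementation):

	// Rename bridge/podman CNI configs so the runtime ignores them,
	// mirroring the `find ... -exec mv {} {}.mk_disabled` call above.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		const dir = "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue // already disabled, or not a file
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
					continue
				}
				fmt.Println("disabled", src)
			}
		}
	}
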
	I1025 09:37:05.524299  214786 start.go:495] detecting cgroup driver to use...
	I1025 09:37:05.524329  214786 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:37:05.524379  214786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:37:05.546799  214786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:37:05.560575  214786 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:37:05.560634  214786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:37:05.582509  214786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:37:05.602101  214786 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:37:05.780302  214786 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:37:05.962364  214786 docker.go:234] disabling docker service ...
	I1025 09:37:05.962428  214786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:37:05.991497  214786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:37:06.029024  214786 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:37:06.186264  214786 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:37:06.375698  214786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:37:06.389558  214786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:37:06.404437  214786 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:37:06.404504  214786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:06.413271  214786 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:37:06.413338  214786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:06.422236  214786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:06.431640  214786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:06.440541  214786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:37:06.449169  214786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:06.458477  214786 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:06.471835  214786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
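
The run of sed calls above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, and open low ports via default_sysctls. A small Go sketch of the first two substitutions, assuming the same file path and patterns as the log (a sketch of the sed calls, not minikube's code):

	// Apply the pause_image / cgroup_manager rewrites from the log.
	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// Same patterns as the two `sudo sed -i` calls above.
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			log.Fatal(err)
		}
	}
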
	I1025 09:37:06.480561  214786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:37:06.493559  214786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:37:06.507213  214786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:06.676781  214786 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:37:06.878678  214786 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:37:06.878748  214786 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:37:06.882913  214786 start.go:563] Will wait 60s for crictl version
	I1025 09:37:06.882986  214786 ssh_runner.go:195] Run: which crictl
	I1025 09:37:06.886363  214786 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:37:06.916783  214786 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
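
The `crictl version` output above is plain "Key:  value" lines, which is easy to post-process when checking a node by hand. A hedged Go sketch that runs the same command and extracts the runtime fields (assumes crictl is on PATH and the CRI socket is reachable):

	// Run `crictl version` and report the runtime name/version/API fields.
	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "version").Output()
		if err != nil {
			log.Fatal(err)
		}
		fields := map[string]string{}
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
				fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
			}
		}
		// With the log above this prints: cri-o 1.34.1 (API v1)
		fmt.Printf("%s %s (API %s)\n",
			fields["RuntimeName"], fields["RuntimeVersion"], fields["RuntimeApiVersion"])
	}
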
	I1025 09:37:06.916871  214786 ssh_runner.go:195] Run: crio --version
	I1025 09:37:06.957202  214786 ssh_runner.go:195] Run: crio --version
	I1025 09:37:06.998836  214786 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:37:07.002243  214786 cli_runner.go:164] Run: docker network inspect auto-068349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:37:07.028098  214786 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 09:37:07.032483  214786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
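
The bash one-liner above refreshes the host.minikube.internal entry: strip any stale line, append the gateway mapping, and copy the result back over /etc/hosts. The same idea as a Go sketch (it rewrites the file directly after reading it fully, rather than staging through /tmp/h.$$ as the shell version does; the IP is taken from the log):

	// Replace the host.minikube.internal mapping in /etc/hosts.
	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.85.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := lines[:0]
		for _, l := range lines {
			if !strings.HasSuffix(l, "\thost.minikube.internal") {
				kept = append(kept, l) // drop only the stale mapping
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			log.Fatal(err)
		}
	}
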
	I1025 09:37:07.042887  214786 kubeadm.go:883] updating cluster {Name:auto-068349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-068349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:37:07.042998  214786 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:37:07.043061  214786 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:37:07.079203  214786 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:37:07.079230  214786 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:37:07.079286  214786 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:37:07.105079  214786 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:37:07.105105  214786 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:37:07.105112  214786 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 09:37:07.105195  214786 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-068349 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-068349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:37:07.105274  214786 ssh_runner.go:195] Run: crio config
	I1025 09:37:07.165651  214786 cni.go:84] Creating CNI manager for ""
	I1025 09:37:07.165673  214786 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:37:07.165696  214786 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:37:07.165719  214786 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-068349 NodeName:auto-068349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:37:07.165847  214786 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-068349"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
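The kubeadm config rendered above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). When debugging a failed init it can help to re-read the file that actually landed on the node and confirm key fields; a sketch using gopkg.in/yaml.v3, assuming the /var/tmp/minikube/kubeadm.yaml path from the log:

	// Decode each YAML document and print the kubelet runtime settings.
	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	type doc struct {
		Kind                     string `yaml:"kind"`
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	}

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var d doc
			if err := dec.Decode(&d); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			if d.Kind == "KubeletConfiguration" {
				// Expect cgroupfs and the CRI-O socket, per the config above.
				fmt.Println("cgroupDriver:", d.CgroupDriver)
				fmt.Println("runtime endpoint:", d.ContainerRuntimeEndpoint)
			}
		}
	}
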
	I1025 09:37:07.165921  214786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:37:07.175359  214786 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:37:07.175422  214786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:37:07.182794  214786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1025 09:37:07.195797  214786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:37:07.208297  214786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1025 09:37:07.221005  214786 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:37:07.224415  214786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:37:07.234163  214786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:07.346657  214786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:37:07.363018  214786 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349 for IP: 192.168.85.2
	I1025 09:37:07.363043  214786 certs.go:195] generating shared ca certs ...
	I1025 09:37:07.363059  214786 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:07.363244  214786 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:37:07.363307  214786 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:37:07.363323  214786 certs.go:257] generating profile certs ...
	I1025 09:37:07.363394  214786 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.key
	I1025 09:37:07.363412  214786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt with IP's: []
	I1025 09:37:08.118354  214786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt ...
	I1025 09:37:08.118386  214786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt: {Name:mkf3844481f6d137a348604bf759496511bf005d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:08.118576  214786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.key ...
	I1025 09:37:08.118590  214786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.key: {Name:mk1abe8e96987e07432d85c390d8053756e56039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:08.118676  214786 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.key.a6755b95
	I1025 09:37:08.118695  214786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.crt.a6755b95 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1025 09:37:08.389428  214786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.crt.a6755b95 ...
	I1025 09:37:08.389459  214786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.crt.a6755b95: {Name:mk639d44540e200e77566e7443b2a7036ce15252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:08.389669  214786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.key.a6755b95 ...
	I1025 09:37:08.389687  214786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.key.a6755b95: {Name:mk82258b28cbf222f1bea0faa41cf6fb2fd0b04d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:08.389778  214786 certs.go:382] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.crt.a6755b95 -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.crt
	I1025 09:37:08.389862  214786 certs.go:386] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.key.a6755b95 -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.key
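
The apiserver certificate above is issued with four IP SANs: the in-cluster service VIP (10.96.0.1), loopback, 10.0.0.1, and the node IP (192.168.85.2). A self-contained Go sketch that issues a certificate with the same SAN set, using a throwaway ECDSA CA for illustration (minikube signs with its shared minikubeCA and its own key types instead):

	// Issue a leaf certificate whose IP SANs match the apiserver cert above.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			log.Fatal(err)
		}
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		leafKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			log.Fatal(err)
		}
		leaf := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The four IP SANs from the log line above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
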
	I1025 09:37:08.389925  214786 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/proxy-client.key
	I1025 09:37:08.389943  214786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/proxy-client.crt with IP's: []
	I1025 09:37:09.226693  214786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/proxy-client.crt ...
	I1025 09:37:09.226725  214786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/proxy-client.crt: {Name:mk5eb8d32a06c15e8d3db32c80e10bda280eeef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:09.226912  214786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/proxy-client.key ...
	I1025 09:37:09.226925  214786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/proxy-client.key: {Name:mk08771d4ef8be7e5e66fe2543d759828932c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:09.227121  214786 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:37:09.227163  214786 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:37:09.227176  214786 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:37:09.227200  214786 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:37:09.227230  214786 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:37:09.227253  214786 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:37:09.227300  214786 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:37:09.227895  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:37:09.246105  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:37:09.264798  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:37:09.284256  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:37:09.302186  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1025 09:37:09.328562  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:37:09.350260  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:37:09.372588  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:37:09.394670  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:37:09.416597  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:37:09.438597  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:37:09.457637  214786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:37:09.474322  214786 ssh_runner.go:195] Run: openssl version
	I1025 09:37:09.482773  214786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:37:09.493261  214786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:09.497595  214786 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:09.497658  214786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:09.545330  214786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:37:09.554743  214786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:37:09.564417  214786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:37:09.569037  214786 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:37:09.569179  214786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:37:09.611550  214786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
	I1025 09:37:09.619733  214786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:37:09.627830  214786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:37:09.631508  214786 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:37:09.631588  214786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:37:09.688831  214786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
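
The test/ln pattern above follows OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is located via a symlink named <subject-hash>.0 (b5213941.0 for minikubeCA here). A Go sketch of the same two steps, assuming the cert paths from the log:

	// Create the <subject-hash>.0 symlink OpenSSL uses to locate a CA cert.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // `ln -fs` semantics: replace any existing link
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			log.Fatal(err)
		}
		fmt.Println(link, "-> minikubeCA.pem")
	}
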
	I1025 09:37:09.709544  214786 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:37:09.714354  214786 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:37:09.714404  214786 kubeadm.go:400] StartCluster: {Name:auto-068349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-068349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:37:09.714478  214786 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:37:09.714541  214786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:37:09.756168  214786 cri.go:89] found id: ""
	I1025 09:37:09.756241  214786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:37:09.773781  214786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:37:09.783182  214786 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:37:09.783243  214786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:37:09.792606  214786 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:37:09.792623  214786 kubeadm.go:157] found existing configuration files:
	
	I1025 09:37:09.792674  214786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:37:09.805260  214786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:37:09.805319  214786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:37:09.812827  214786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:37:09.824611  214786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:37:09.824735  214786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:37:09.831870  214786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:37:09.840201  214786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:37:09.840265  214786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:37:09.847533  214786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:37:09.855010  214786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:37:09.855074  214786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
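
The four grep/rm pairs above implement stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so kubeadm regenerates it (here they are simply absent on first start). The loop in miniature, as a Go sketch:

	// Remove kubeconfigs that do not point at the expected endpoint.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		endpoint := []byte("https://control-plane.minikube.internal:8443")
		for _, p := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(p)
			if err != nil || !bytes.Contains(data, endpoint) {
				os.Remove(p) // missing or stale: kubeadm will recreate it
				fmt.Println("removed or absent:", p)
				continue
			}
			fmt.Println("kept:", p)
		}
	}
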
	I1025 09:37:09.862251  214786 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:37:09.916967  214786 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:37:09.917322  214786 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:37:09.954712  214786 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:37:09.954993  214786 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 09:37:09.955039  214786 kubeadm.go:318] OS: Linux
	I1025 09:37:09.955092  214786 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:37:09.955146  214786 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 09:37:09.955199  214786 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:37:09.955252  214786 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:37:09.955306  214786 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:37:09.955374  214786 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:37:09.955425  214786 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:37:09.955478  214786 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:37:09.955530  214786 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 09:37:10.039898  214786 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:37:10.040020  214786 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:37:10.040120  214786 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:37:10.054523  214786 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:37:05.901113  216293 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-666079" ...
	I1025 09:37:05.901193  216293 cli_runner.go:164] Run: docker start default-k8s-diff-port-666079
	I1025 09:37:06.220400  216293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:37:06.238958  216293 kic.go:430] container "default-k8s-diff-port-666079" state is running.
	I1025 09:37:06.239351  216293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-666079
	I1025 09:37:06.268847  216293 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/config.json ...
	I1025 09:37:06.269082  216293 machine.go:93] provisionDockerMachine start ...
	I1025 09:37:06.269143  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:06.298473  216293 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:06.298786  216293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1025 09:37:06.298795  216293 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:37:06.301314  216293 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 09:37:09.462103  216293 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-666079
	
	I1025 09:37:09.462183  216293 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-666079"
	I1025 09:37:09.462321  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:09.487256  216293 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:09.487565  216293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1025 09:37:09.487577  216293 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-666079 && echo "default-k8s-diff-port-666079" | sudo tee /etc/hostname
	I1025 09:37:09.660089  216293 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-666079
	
	I1025 09:37:09.660196  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:09.690703  216293 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:09.691010  216293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1025 09:37:09.691034  216293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-666079' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-666079/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-666079' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:37:09.870562  216293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
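
The shell fragment above keeps the container's hostname locally resolvable: if no /etc/hosts line already ends in the hostname, it rewrites an existing 127.0.1.1 entry or appends one. The same logic as a Go sketch (hostname hard-coded from the log for illustration):

	// Ensure /etc/hosts maps 127.0.1.1 to the machine hostname.
	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const host = "default-k8s-diff-port-666079"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		for _, l := range lines {
			if strings.HasSuffix(l, " "+host) || strings.HasSuffix(l, "\t"+host) {
				return // hostname already resolvable locally
			}
		}
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + host // rewrite the existing alias line
				replaced = true
				break
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+host)
		}
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(lines, "\n")+"\n"), 0o644); err != nil {
			log.Fatal(err)
		}
	}
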
	I1025 09:37:09.870591  216293 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:37:09.870621  216293 ubuntu.go:190] setting up certificates
	I1025 09:37:09.870631  216293 provision.go:84] configureAuth start
	I1025 09:37:09.870691  216293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-666079
	I1025 09:37:09.895823  216293 provision.go:143] copyHostCerts
	I1025 09:37:09.895892  216293 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:37:09.895912  216293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:37:09.895995  216293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:37:09.896102  216293 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:37:09.896111  216293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:37:09.896137  216293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:37:09.896195  216293 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:37:09.896203  216293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:37:09.896228  216293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
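
copyHostCerts above deliberately removes each destination before copying ("found ..., removing ...") so a stale or truncated cert never survives a re-run. A small Go sketch of that replace-then-copy pattern (paths are placeholders):

	// Copy src over dst, removing dst first so stale content cannot linger.
	package main

	import (
		"io"
		"log"
		"os"
	)

	func copyReplace(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0o600)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		if err := copyReplace("certs/ca.pem", "ca.pem"); err != nil {
			log.Fatal(err)
		}
	}
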
	I1025 09:37:09.896279  216293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-666079 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-666079 localhost minikube]
	I1025 09:37:10.478822  216293 provision.go:177] copyRemoteCerts
	I1025 09:37:10.478942  216293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:37:10.479048  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:10.497083  216293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:37:10.061200  214786 out.go:252]   - Generating certificates and keys ...
	I1025 09:37:10.061293  214786 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:37:10.061374  214786 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:37:10.986145  214786 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:37:11.195317  214786 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:37:10.610348  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:37:10.631714  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 09:37:10.650875  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:37:10.669798  216293 provision.go:87] duration metric: took 799.137725ms to configureAuth
	I1025 09:37:10.669871  216293 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:37:10.670130  216293 config.go:182] Loaded profile config "default-k8s-diff-port-666079": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:37:10.670279  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:10.688741  216293 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:10.689057  216293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1025 09:37:10.689072  216293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:37:11.023677  216293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:37:11.023766  216293 machine.go:96] duration metric: took 4.754675305s to provisionDockerMachine
	I1025 09:37:11.023791  216293 start.go:293] postStartSetup for "default-k8s-diff-port-666079" (driver="docker")
	I1025 09:37:11.023817  216293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:37:11.023947  216293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:37:11.024030  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:11.043579  216293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:37:11.159762  216293 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:37:11.164153  216293 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:37:11.164226  216293 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:37:11.164254  216293 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:37:11.164346  216293 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:37:11.164472  216293 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:37:11.164637  216293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:37:11.173342  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:37:11.194236  216293 start.go:296] duration metric: took 170.416469ms for postStartSetup
	I1025 09:37:11.194359  216293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:37:11.194429  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:11.219034  216293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:37:11.328574  216293 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:37:11.334368  216293 fix.go:56] duration metric: took 5.459334092s for fixHost
	I1025 09:37:11.334402  216293 start.go:83] releasing machines lock for "default-k8s-diff-port-666079", held for 5.45939335s
	I1025 09:37:11.334482  216293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-666079
	I1025 09:37:11.361690  216293 ssh_runner.go:195] Run: cat /version.json
	I1025 09:37:11.361761  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:11.362032  216293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:37:11.362134  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:11.403456  216293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:37:11.405604  216293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:37:11.604502  216293 ssh_runner.go:195] Run: systemctl --version
	I1025 09:37:11.613198  216293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:37:11.666871  216293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:37:11.673336  216293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:37:11.673440  216293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:37:11.684646  216293 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:37:11.684691  216293 start.go:495] detecting cgroup driver to use...
	I1025 09:37:11.684754  216293 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:37:11.684817  216293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:37:11.704647  216293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:37:11.723195  216293 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:37:11.723286  216293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:37:11.744196  216293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:37:11.761759  216293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:37:11.910596  216293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:37:12.078286  216293 docker.go:234] disabling docker service ...
	I1025 09:37:12.078366  216293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:37:12.099815  216293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:37:12.114772  216293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:37:12.277359  216293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:37:12.433182  216293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:37:12.451291  216293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:37:12.467778  216293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:37:12.467870  216293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:12.477619  216293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:37:12.477696  216293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:12.488289  216293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:12.498792  216293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:12.510131  216293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:37:12.520653  216293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:12.531799  216293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:12.542026  216293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:12.552254  216293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:37:12.560594  216293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:37:12.569486  216293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:12.752285  216293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:37:12.898781  216293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:37:12.898874  216293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:37:12.903539  216293 start.go:563] Will wait 60s for crictl version
	I1025 09:37:12.903638  216293 ssh_runner.go:195] Run: which crictl
	I1025 09:37:12.907869  216293 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:37:12.938916  216293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:37:12.939036  216293 ssh_runner.go:195] Run: crio --version
	I1025 09:37:12.969908  216293 ssh_runner.go:195] Run: crio --version
	I1025 09:37:13.009399  216293 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:37:13.012380  216293 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-666079 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:37:13.031651  216293 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 09:37:13.036092  216293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:37:13.045784  216293 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-666079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:37:13.045897  216293 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:37:13.045962  216293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:37:13.082477  216293 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:37:13.082505  216293 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:37:13.082559  216293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:37:13.120320  216293 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:37:13.120346  216293 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:37:13.120355  216293 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1025 09:37:13.120447  216293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-666079 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
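In the kubelet drop-in above, the empty ExecStart= line is the standard systemd override idiom: it clears the ExecStart inherited from the base unit so the following line can substitute minikube's own command, while --hostname-override and --node-ip pin the node's identity to the profile's name and IP.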
	I1025 09:37:13.120520  216293 ssh_runner.go:195] Run: crio config
	I1025 09:37:13.232148  216293 cni.go:84] Creating CNI manager for ""
	I1025 09:37:13.232176  216293 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:37:13.232207  216293 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:37:13.232240  216293 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-666079 NodeName:default-k8s-diff-port-666079 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:37:13.232412  216293 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-666079"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:37:13.232513  216293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:37:13.241266  216293 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:37:13.241351  216293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:37:13.251310  216293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1025 09:37:13.264094  216293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:37:13.276611  216293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
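The freshly rendered config lands as kubeadm.yaml.new and is only promoted over the existing file if they differ (see the sudo diff further down). It can also be sanity-checked by hand; a sketch, assuming the kubeadm binary staged under /var/lib/minikube/binaries is used (this command is not part of the test run):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new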
	I1025 09:37:13.289240  216293 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:37:13.292811  216293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:37:13.301999  216293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:13.448898  216293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:37:13.465098  216293 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079 for IP: 192.168.76.2
	I1025 09:37:13.465175  216293 certs.go:195] generating shared ca certs ...
	I1025 09:37:13.465208  216293 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:13.465395  216293 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:37:13.465462  216293 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:37:13.465485  216293 certs.go:257] generating profile certs ...
	I1025 09:37:13.465617  216293 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.key
	I1025 09:37:13.465722  216293 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.key.f342de6b
	I1025 09:37:13.465786  216293 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.key
	I1025 09:37:13.465937  216293 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:37:13.466015  216293 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:37:13.466045  216293 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:37:13.466091  216293 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:37:13.466147  216293 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:37:13.466193  216293 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:37:13.466276  216293 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:37:13.466914  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:37:13.527190  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:37:13.569557  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:37:13.588582  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:37:13.607646  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 09:37:13.626657  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:37:13.647731  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:37:13.672826  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:37:13.698140  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:37:13.797649  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:37:13.844132  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:37:13.864651  216293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:37:13.878204  216293 ssh_runner.go:195] Run: openssl version
	I1025 09:37:13.885205  216293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:37:13.893696  216293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:13.898094  216293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:13.898161  216293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:13.939701  216293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:37:13.947652  216293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:37:13.956023  216293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:37:13.960596  216293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:37:13.960677  216293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:37:14.004399  216293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
	I1025 09:37:14.013765  216293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:37:14.023028  216293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:37:14.027658  216293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:37:14.027735  216293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:37:14.070778  216293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
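The hash-and-symlink sequence above is OpenSSL's directory-lookup scheme: openssl x509 -hash -noout prints the certificate's subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs lets any OpenSSL-linked client resolve the CA without rebuilding a bundle. For the minikubeCA cert from this run the check can be reproduced directly:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink -> minikubeCA.pem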
	I1025 09:37:14.079303  216293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:37:14.084005  216293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:37:14.126843  216293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:37:14.168425  216293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:37:14.211765  216293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:37:14.276559  216293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:37:14.354252  216293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
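Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within 86400 seconds (24 hours), which is how minikube decides whether the existing control-plane certs can be reused. The same audit over the files from this run, written as a loop:

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 || echo "$c expires within 24h"
	done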
	I1025 09:37:14.460340  216293 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-666079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:37:14.460443  216293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:37:14.460527  216293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:37:14.604929  216293 cri.go:89] found id: "c26bf38fd7e4b9f51947f954e4ee102888ffd02a198adb203972580c4eb3c74d"
	I1025 09:37:14.604966  216293 cri.go:89] found id: "93c1d103bf05eb8996db42684ab453c3e8a59e4287467d1fb344225e54155651"
	I1025 09:37:14.604972  216293 cri.go:89] found id: "fe95bac5f1e76131716e125587dd727d7db7bdabeed57b1078cc75158bc0da09"
	I1025 09:37:14.604985  216293 cri.go:89] found id: "36dbd5d0fba8fd463698b1cfb95820a97032c9e08ee3218bc4e23d5db821fa62"
	I1025 09:37:14.604988  216293 cri.go:89] found id: ""
	I1025 09:37:14.605053  216293 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:37:14.625213  216293 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:37:14Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:37:14.625301  216293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:37:14.670352  216293 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:37:14.670374  216293 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:37:14.670433  216293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:37:14.702662  216293 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:37:14.703129  216293 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-666079" does not appear in /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:37:14.703260  216293 kubeconfig.go:62] /home/jenkins/minikube-integration/21796-2312/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-666079" cluster setting kubeconfig missing "default-k8s-diff-port-666079" context setting]
	I1025 09:37:14.703591  216293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:14.705195  216293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:37:14.723722  216293 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 09:37:14.723766  216293 kubeadm.go:601] duration metric: took 53.38553ms to restartPrimaryControlPlane
	I1025 09:37:14.723775  216293 kubeadm.go:402] duration metric: took 263.459372ms to StartCluster
	I1025 09:37:14.723789  216293 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:14.723864  216293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:37:14.724539  216293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:14.724768  216293 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:37:14.725144  216293 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:37:14.725223  216293 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-666079"
	I1025 09:37:14.725243  216293 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-666079"
	W1025 09:37:14.725255  216293 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:37:14.725276  216293 host.go:66] Checking if "default-k8s-diff-port-666079" exists ...
	I1025 09:37:14.725729  216293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:37:14.726152  216293 config.go:182] Loaded profile config "default-k8s-diff-port-666079": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:37:14.726221  216293 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-666079"
	I1025 09:37:14.726239  216293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-666079"
	I1025 09:37:14.726515  216293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:37:14.726668  216293 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-666079"
	I1025 09:37:14.726712  216293 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-666079"
	W1025 09:37:14.726742  216293 addons.go:247] addon dashboard should already be in state true
	I1025 09:37:14.726777  216293 host.go:66] Checking if "default-k8s-diff-port-666079" exists ...
	I1025 09:37:14.727246  216293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:37:14.737646  216293 out.go:179] * Verifying Kubernetes components...
	I1025 09:37:14.753453  216293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:14.787798  216293 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:37:14.790736  216293 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-666079"
	W1025 09:37:14.790814  216293 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:37:14.790846  216293 host.go:66] Checking if "default-k8s-diff-port-666079" exists ...
	I1025 09:37:14.791258  216293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:37:14.791436  216293 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:37:14.791449  216293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:37:14.791486  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:14.791942  216293 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 09:37:14.794996  216293 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:37:14.797904  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:37:14.797924  216293 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:37:14.798105  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:14.839619  216293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:37:14.847927  216293 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:37:14.847952  216293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:37:14.848010  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:14.859606  216293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:37:14.877708  216293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:37:15.275637  216293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:37:15.319240  216293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:37:15.341709  216293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:37:15.355571  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:37:15.355643  216293 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:37:15.461358  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:37:15.461429  216293 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:37:12.254434  214786 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:37:12.590267  214786 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:37:13.058351  214786 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:37:13.058492  214786 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-068349 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 09:37:13.755270  214786 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:37:13.755816  214786 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-068349 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 09:37:15.209347  214786 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:37:16.265762  214786 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:37:16.613439  214786 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:37:16.613914  214786 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:37:17.356261  214786 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:37:17.996076  214786 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:37:18.326350  214786 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:37:19.552967  214786 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:37:20.143960  214786 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:37:20.145123  214786 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:37:20.148160  214786 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:37:15.576548  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:37:15.576623  216293 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:37:15.671385  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:37:15.671460  216293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:37:15.739899  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:37:15.739974  216293 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:37:15.777663  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:37:15.777724  216293 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:37:15.814982  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:37:15.815057  216293 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:37:15.856516  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:37:15.856578  216293 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:37:15.893678  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:37:15.893752  216293 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:37:15.930405  216293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:37:20.151446  214786 out.go:252]   - Booting up control plane ...
	I1025 09:37:20.151555  214786 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:37:20.151637  214786 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:37:20.152873  214786 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:37:20.198092  214786 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:37:20.198205  214786 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:37:20.210370  214786 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:37:20.215365  214786 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:37:20.215812  214786 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:37:20.411037  214786 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:37:20.411161  214786 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:37:22.136606  216293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.860938426s)
	I1025 09:37:22.136722  216293 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.817411017s)
	I1025 09:37:22.136800  216293 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-666079" to be "Ready" ...
	I1025 09:37:22.176493  216293 node_ready.go:49] node "default-k8s-diff-port-666079" is "Ready"
	I1025 09:37:22.176574  216293 node_ready.go:38] duration metric: took 39.760169ms for node "default-k8s-diff-port-666079" to be "Ready" ...
	I1025 09:37:22.176603  216293 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:37:22.176698  216293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:37:24.561612  216293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.219831454s)
	I1025 09:37:24.790650  216293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.860156831s)
	I1025 09:37:24.790838  216293 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.614108545s)
	I1025 09:37:24.790859  216293 api_server.go:72] duration metric: took 10.066064389s to wait for apiserver process to appear ...
	I1025 09:37:24.790865  216293 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:37:24.790903  216293 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1025 09:37:24.793832  216293 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-666079 addons enable metrics-server
	
	I1025 09:37:24.796624  216293 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1025 09:37:24.799451  216293 addons.go:514] duration metric: took 10.074301486s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1025 09:37:24.807424  216293 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1025 09:37:24.810071  216293 api_server.go:141] control plane version: v1.34.1
	I1025 09:37:24.810098  216293 api_server.go:131] duration metric: took 19.207412ms to wait for apiserver health ...
	I1025 09:37:24.810108  216293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:37:24.832250  216293 system_pods.go:59] 8 kube-system pods found
	I1025 09:37:24.832294  216293 system_pods.go:61] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:37:24.832309  216293 system_pods.go:61] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:37:24.832315  216293 system_pods.go:61] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:37:24.832322  216293 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:37:24.832330  216293 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:37:24.832342  216293 system_pods.go:61] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 09:37:24.832349  216293 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:37:24.832364  216293 system_pods.go:61] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:37:24.832371  216293 system_pods.go:74] duration metric: took 22.256374ms to wait for pod list to return data ...
	I1025 09:37:24.832384  216293 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:37:24.852452  216293 default_sa.go:45] found service account: "default"
	I1025 09:37:24.852481  216293 default_sa.go:55] duration metric: took 20.091106ms for default service account to be created ...
	I1025 09:37:24.852491  216293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:37:24.931080  216293 system_pods.go:86] 8 kube-system pods found
	I1025 09:37:24.931125  216293 system_pods.go:89] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:37:24.931136  216293 system_pods.go:89] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:37:24.931142  216293 system_pods.go:89] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:37:24.931155  216293 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:37:24.931171  216293 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:37:24.931189  216293 system_pods.go:89] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 09:37:24.931201  216293 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:37:24.931208  216293 system_pods.go:89] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:37:24.931222  216293 system_pods.go:126] duration metric: took 78.726137ms to wait for k8s-apps to be running ...
	I1025 09:37:24.931231  216293 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:37:24.931298  216293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:37:24.964502  216293 system_svc.go:56] duration metric: took 33.262733ms WaitForService to wait for kubelet
	I1025 09:37:24.964541  216293 kubeadm.go:586] duration metric: took 10.23973488s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:37:24.964561  216293 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:37:24.986728  216293 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:37:24.986763  216293 node_conditions.go:123] node cpu capacity is 2
	I1025 09:37:24.986776  216293 node_conditions.go:105] duration metric: took 22.209858ms to run NodePressure ...
	I1025 09:37:24.986798  216293 start.go:241] waiting for startup goroutines ...
	I1025 09:37:24.986809  216293 start.go:246] waiting for cluster config update ...
	I1025 09:37:24.986826  216293 start.go:255] writing updated cluster config ...
	I1025 09:37:24.987144  216293 ssh_runner.go:195] Run: rm -f paused
	I1025 09:37:24.993944  216293 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:37:25.039608  216293 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dzmkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:21.920941  214786 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500832072s
	I1025 09:37:21.922341  214786 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:37:21.922715  214786 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1025 09:37:21.923577  214786 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:37:21.924166  214786 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1025 09:37:27.050559  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:29.546643  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	I1025 09:37:27.376100  214786 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.452075196s
	I1025 09:37:30.246995  214786 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.322380868s
	I1025 09:37:32.425840  214786 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.502647168s
	I1025 09:37:32.451373  214786 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:37:32.470313  214786 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:37:32.487492  214786 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:37:32.487707  214786 kubeadm.go:318] [mark-control-plane] Marking the node auto-068349 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:37:32.504207  214786 kubeadm.go:318] [bootstrap-token] Using token: v8cdem.gfd8enqpbrf2mgjt
	I1025 09:37:32.507304  214786 out.go:252]   - Configuring RBAC rules ...
	I1025 09:37:32.507427  214786 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:37:32.517413  214786 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:37:32.528013  214786 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:37:32.533673  214786 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:37:32.540267  214786 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:37:32.547339  214786 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:37:32.833641  214786 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:37:33.288624  214786 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:37:33.842105  214786 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:37:33.846498  214786 kubeadm.go:318] 
	I1025 09:37:33.846682  214786 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:37:33.846691  214786 kubeadm.go:318] 
	I1025 09:37:33.846779  214786 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:37:33.846785  214786 kubeadm.go:318] 
	I1025 09:37:33.846811  214786 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:37:33.846926  214786 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:37:33.846980  214786 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:37:33.846985  214786 kubeadm.go:318] 
	I1025 09:37:33.847040  214786 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:37:33.847044  214786 kubeadm.go:318] 
	I1025 09:37:33.847095  214786 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:37:33.847099  214786 kubeadm.go:318] 
	I1025 09:37:33.847153  214786 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:37:33.847230  214786 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:37:33.847301  214786 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:37:33.847305  214786 kubeadm.go:318] 
	I1025 09:37:33.847392  214786 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:37:33.847471  214786 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:37:33.847476  214786 kubeadm.go:318] 
	I1025 09:37:33.847562  214786 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token v8cdem.gfd8enqpbrf2mgjt \
	I1025 09:37:33.847669  214786 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b \
	I1025 09:37:33.847694  214786 kubeadm.go:318] 	--control-plane 
	I1025 09:37:33.847698  214786 kubeadm.go:318] 
	I1025 09:37:33.847786  214786 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:37:33.847790  214786 kubeadm.go:318] 
	I1025 09:37:33.847876  214786 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token v8cdem.gfd8enqpbrf2mgjt \
	I1025 09:37:33.848008  214786 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b 
	I1025 09:37:33.860121  214786 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 09:37:33.860344  214786 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 09:37:33.860447  214786 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
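The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key, so it can be recomputed on the control plane whenever the init output is lost; the standard recipe from the Kubernetes docs (CA path assumed, not taken from this run) is:

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'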
	I1025 09:37:33.860462  214786 cni.go:84] Creating CNI manager for ""
	I1025 09:37:33.860469  214786 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:37:33.864168  214786 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1025 09:37:31.548144  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:33.562306  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	I1025 09:37:33.867296  214786 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:37:33.878468  214786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:37:33.878486  214786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:37:33.917722  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:37:34.397766  214786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:37:34.397900  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:34.397994  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-068349 minikube.k8s.io/updated_at=2025_10_25T09_37_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=auto-068349 minikube.k8s.io/primary=true
	I1025 09:37:34.834936  214786 ops.go:34] apiserver oom_adj: -16
	I1025 09:37:34.835046  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:35.335825  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:35.835609  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:36.335143  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:36.835359  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:37.335703  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:37.835152  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:38.335512  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:38.473931  214786 kubeadm.go:1113] duration metric: took 4.076071803s to wait for elevateKubeSystemPrivileges
	I1025 09:37:38.473973  214786 kubeadm.go:402] duration metric: took 28.759573856s to StartCluster
	I1025 09:37:38.474011  214786 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:38.474111  214786 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:37:38.475285  214786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:38.475560  214786 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:37:38.475702  214786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:37:38.476023  214786 config.go:182] Loaded profile config "auto-068349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:37:38.476092  214786 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:37:38.476170  214786 addons.go:69] Setting storage-provisioner=true in profile "auto-068349"
	I1025 09:37:38.476183  214786 addons.go:238] Setting addon storage-provisioner=true in "auto-068349"
	I1025 09:37:38.476222  214786 host.go:66] Checking if "auto-068349" exists ...
	I1025 09:37:38.477614  214786 cli_runner.go:164] Run: docker container inspect auto-068349 --format={{.State.Status}}
	I1025 09:37:38.477801  214786 addons.go:69] Setting default-storageclass=true in profile "auto-068349"
	I1025 09:37:38.477862  214786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-068349"
	I1025 09:37:38.478378  214786 cli_runner.go:164] Run: docker container inspect auto-068349 --format={{.State.Status}}
	I1025 09:37:38.480453  214786 out.go:179] * Verifying Kubernetes components...
	I1025 09:37:38.486298  214786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:38.524301  214786 addons.go:238] Setting addon default-storageclass=true in "auto-068349"
	I1025 09:37:38.524351  214786 host.go:66] Checking if "auto-068349" exists ...
	I1025 09:37:38.524878  214786 cli_runner.go:164] Run: docker container inspect auto-068349 --format={{.State.Status}}
	I1025 09:37:38.551887  214786 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:37:38.556857  214786 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:37:38.556880  214786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:37:38.556972  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:38.573486  214786 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:37:38.573507  214786 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:37:38.573576  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:38.586416  214786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa Username:docker}
	I1025 09:37:38.624836  214786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa Username:docker}
	I1025 09:37:39.231920  214786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:37:39.246329  214786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:37:39.262489  214786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:37:39.262926  214786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:37:40.470934  214786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.224516428s)
	I1025 09:37:40.471231  214786 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.20856242s)
	I1025 09:37:40.471395  214786 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.208413471s)
	I1025 09:37:40.471419  214786 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
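	[editor's note] The bash pipeline that just completed edits the Corefile held in the coredns ConfigMap in place: per its two sed expressions, a hosts stanza is spliced in just above the forward directive, and a log directive just above errors, which is what makes host.minikube.internal resolvable from pods. The affected fragment of the resulting Corefile should look like this (the rest of the file elided):

    log
    errors
    ...
    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf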
	I1025 09:37:40.472304  214786 node_ready.go:35] waiting up to 15m0s for node "auto-068349" to be "Ready" ...
	I1025 09:37:40.475266  214786 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1025 09:37:36.045492  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:38.048242  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:40.545236  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	I1025 09:37:40.479338  214786 addons.go:514] duration metric: took 2.003242983s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1025 09:37:40.976821  214786 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-068349" context rescaled to 1 replicas
	W1025 09:37:42.545876  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:45.060094  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:42.475416  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:37:44.475720  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:37:47.549605  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:50.050862  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:46.975791  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:37:49.477387  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:37:52.545927  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:54.546731  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:51.976660  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:37:54.475719  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:37:57.046246  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	I1025 09:37:59.044952  216293 pod_ready.go:94] pod "coredns-66bc5c9577-dzmkq" is "Ready"
	I1025 09:37:59.044983  216293 pod_ready.go:86] duration metric: took 34.005343863s for pod "coredns-66bc5c9577-dzmkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:59.048184  216293 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:59.052576  216293 pod_ready.go:94] pod "etcd-default-k8s-diff-port-666079" is "Ready"
	I1025 09:37:59.052604  216293 pod_ready.go:86] duration metric: took 4.395273ms for pod "etcd-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:59.054930  216293 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:59.059303  216293 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-666079" is "Ready"
	I1025 09:37:59.059334  216293 pod_ready.go:86] duration metric: took 4.378887ms for pod "kube-apiserver-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:59.061643  216293 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:59.242819  216293 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-666079" is "Ready"
	I1025 09:37:59.242850  216293 pod_ready.go:86] duration metric: took 181.182508ms for pod "kube-controller-manager-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:59.442983  216293 pod_ready.go:83] waiting for pod "kube-proxy-65j7p" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:59.843100  216293 pod_ready.go:94] pod "kube-proxy-65j7p" is "Ready"
	I1025 09:37:59.843127  216293 pod_ready.go:86] duration metric: took 400.116153ms for pod "kube-proxy-65j7p" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:38:00.057426  216293 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:38:00.443739  216293 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-666079" is "Ready"
	I1025 09:38:00.443766  216293 pod_ready.go:86] duration metric: took 386.312856ms for pod "kube-scheduler-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:38:00.443780  216293 pod_ready.go:40] duration metric: took 35.449801908s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:38:00.519304  216293 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:38:00.522677  216293 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-666079" cluster and "default" namespace by default
	W1025 09:37:56.976229  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:37:59.475470  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:38:01.976079  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:38:04.474824  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:38:06.475756  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:38:08.975352  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:38:10.975524  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
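	[editor's note] The node_ready retries above poll the Node object until its Ready condition flips to True, with the 15m0s ceiling stated earlier in the log. A minimal client-go sketch of that check — the kubeconfig path and 2s poll interval are illustrative, not taken from the log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	deadline := time.Now().Add(15 * time.Minute) // matches the 15m0s wait above
    	for time.Now().Before(deadline) {
    		n, err := client.CoreV1().Nodes().Get(context.TODO(), "auto-068349", metav1.GetOptions{})
    		if err == nil && nodeReady(n) {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // retry, as the "will retry" lines above do
    	}
    	fmt.Println("timed out waiting for node to be Ready")
    }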
	
	
	==> CRI-O <==
	Oct 25 09:37:52 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:52.202561278Z" level=info msg="Removed container 5aaa7986e6f9d7b8cf1311e668b322abf9c9f26c1f9f24fefe78e4d2da758fb4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8cm6/dashboard-metrics-scraper" id=7dfb810c-b83c-4675-ac50-4a7176939ddd name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:37:55 default-k8s-diff-port-666079 conmon[1194]: conmon 6c54fce55676c84d4384 <ninfo>: container 1197 exited with status 1
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.198081874Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=37d9c1fa-b55a-4ab5-b23c-30835d492f86 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.199179461Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e809007c-09f2-4071-a85e-6b218d38e09a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.200356008Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=765f365a-e7f6-4ce6-a867-7c25d177ea31 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.200479119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.206833979Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.207062593Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9f9beb9dac1b11cf97a47088db5dd6555f62a179bef274e25b055acd8744bef7/merged/etc/passwd: no such file or directory"
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.207088874Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9f9beb9dac1b11cf97a47088db5dd6555f62a179bef274e25b055acd8744bef7/merged/etc/group: no such file or directory"
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.207432254Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.225628378Z" level=info msg="Created container fcd374f1688b8ed4e7571ec994e121446bd298b6684eee33d3b3ab788c09fd2a: kube-system/storage-provisioner/storage-provisioner" id=765f365a-e7f6-4ce6-a867-7c25d177ea31 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.227531578Z" level=info msg="Starting container: fcd374f1688b8ed4e7571ec994e121446bd298b6684eee33d3b3ab788c09fd2a" id=d9a12f8d-8309-4b8b-b72c-e91012f3de0e name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.229272348Z" level=info msg="Started container" PID=1647 containerID=fcd374f1688b8ed4e7571ec994e121446bd298b6684eee33d3b3ab788c09fd2a description=kube-system/storage-provisioner/storage-provisioner id=d9a12f8d-8309-4b8b-b72c-e91012f3de0e name=/runtime.v1.RuntimeService/StartContainer sandboxID=dec8a1dfb7e4d319fa2dbe078630b07ecf1c6e812e0797d7c107bc8fa3dc4e66
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.154440761Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.159604519Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.159642123Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.159675092Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.164899289Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.164932684Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.164954436Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.168254838Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.168290301Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.16831376Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.171532183Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.171588446Z" level=info msg="Updated default CNI network name to kindnet"
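	[editor's note] The CREATE/WRITE/RENAME triple above is CRI-O's filesystem watch on /etc/cni/net.d reacting to kindnet atomically rewriting its conflist: write to 10-kindnet.conflist.temp, then rename it into place. A sketch of the same watch-and-reload pattern with fsnotify — the directory path and log-only handler are illustrative:

    package main

    import (
    	"log"

    	"github.com/fsnotify/fsnotify"
    )

    func main() {
    	w, err := fsnotify.NewWatcher()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer w.Close()
    	if err := w.Add("/etc/cni/net.d"); err != nil {
    		log.Fatal(err)
    	}
    	for {
    		select {
    		case ev := <-w.Events:
    			// An atomic conflist update surfaces as CREATE (the .temp
    			// file), WRITE, then RENAME, exactly as in the log above.
    			if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
    				log.Printf("CNI monitoring event %s %q; reloading CNI config", ev.Op, ev.Name)
    			}
    		case err := <-w.Errors:
    			log.Println("watch error:", err)
    		}
    	}
    }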
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	fcd374f1688b8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago       Running             storage-provisioner         2                   dec8a1dfb7e4d       storage-provisioner                                    kube-system
	7ba993183832e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   9e2d2ef412434       dashboard-metrics-scraper-6ffb444bf9-n8cm6             kubernetes-dashboard
	8fbb9eadefbd8       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago       Running             kubernetes-dashboard        0                   6d9aa50a0365d       kubernetes-dashboard-855c9754f9-v6j8w                  kubernetes-dashboard
	6c54fce55676c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago       Exited              storage-provisioner         1                   dec8a1dfb7e4d       storage-provisioner                                    kube-system
	d2e32fa53d02a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago       Running             kube-proxy                  1                   06e23fc1ccd57       kube-proxy-65j7p                                       kube-system
	e166325c1923d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago       Running             coredns                     1                   dd681068014df       coredns-66bc5c9577-dzmkq                               kube-system
	bcf200eeb50f5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago       Running             kindnet-cni                 1                   7e1c0a2f8ff65       kindnet-28vnv                                          kube-system
	19f2c73a067d0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   77286359144c8       busybox                                                default
	c26bf38fd7e4b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   ac73eebc77c0f       kube-apiserver-default-k8s-diff-port-666079            kube-system
	93c1d103bf05e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   d6c624662f01d       etcd-default-k8s-diff-port-666079                      kube-system
	fe95bac5f1e76       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   97c04f20a1252       kube-controller-manager-default-k8s-diff-port-666079   kube-system
	36dbd5d0fba8f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   77bfab8d2e645       kube-scheduler-default-k8s-diff-port-666079            kube-system
	
	
	==> coredns [e166325c1923d08d8647a1a3c29bf323317468e389b1f4993ca6afafc167012d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48613 - 30511 "HINFO IN 3318603629285010333.775493663660593314. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016979054s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
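	[editor's note] All three reflector list calls fail the same way: the dial to the kubernetes Service VIP 10.96.0.1:443 times out until kube-proxy (restarted at 09:37:25, per its log below) has programmed its rules. A quick reachability probe for that VIP, as a sketch — the 5s timeout is illustrative:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Probe the kubernetes Service VIP that the reflectors above
    	// cannot reach right after the restart.
    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
    	if err != nil {
    		fmt.Println("service VIP unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("service VIP reachable")
    }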
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-666079
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-666079
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=default-k8s-diff-port-666079
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_35_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:35:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-666079
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:38:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:38:02 +0000   Sat, 25 Oct 2025 09:35:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:38:02 +0000   Sat, 25 Oct 2025 09:35:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:38:02 +0000   Sat, 25 Oct 2025 09:35:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:38:02 +0000   Sat, 25 Oct 2025 09:36:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-666079
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                492daa44-3080-463c-abfd-050b629beadb
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-dzmkq                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-default-k8s-diff-port-666079                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-28vnv                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-666079             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-666079    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-65j7p                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-default-k8s-diff-port-666079             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-n8cm6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-v6j8w                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 49s                    kube-proxy       
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m25s                  kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m25s                  kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m25s                  kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m21s                  node-controller  Node default-k8s-diff-port-666079 event: Registered Node default-k8s-diff-port-666079 in Controller
	  Normal   NodeReady                98s                    kubelet          Node default-k8s-diff-port-666079 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node default-k8s-diff-port-666079 event: Registered Node default-k8s-diff-port-666079 in Controller
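	[editor's note] For reference, the 850m cpu request in the Allocated resources table is just the column sum of the per-pod requests: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the node's 2000m allocatable CPU is the 42% shown (the table truncates 42.5%).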
	
	
	==> dmesg <==
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	[Oct25 09:32] overlayfs: idmapped layers are currently not supported
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	[ +28.289706] overlayfs: idmapped layers are currently not supported
	[Oct25 09:35] overlayfs: idmapped layers are currently not supported
	[Oct25 09:36] overlayfs: idmapped layers are currently not supported
	[ +24.160248] overlayfs: idmapped layers are currently not supported
	[Oct25 09:37] overlayfs: idmapped layers are currently not supported
	[  +8.216028] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [93c1d103bf05eb8996db42684ab453c3e8a59e4287467d1fb344225e54155651] <==
	{"level":"warn","ts":"2025-10-25T09:37:19.688431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.719061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.757463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.774155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.810854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.829355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.844944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.872568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.898108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.926973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.958450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.978080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.017889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.036284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.059042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.101886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.140049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.167802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.191613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.214016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.252171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.267863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.321768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.333125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.430871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35506","server-name":"","error":"EOF"}
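	[editor's note] These "rejected connection ... EOF" warnings are characteristic of plain TCP probes against etcd's TLS client endpoint: a client connects and hangs up without starting a handshake, once per check. That cause is an assumption here (the log only shows the rejections), but it is consistent with the steady stream of fresh localhost source ports. A sketch of a probe that elicits exactly this warning — 2379 is etcd's conventional client port, also an assumption:

    package main

    import (
    	"log"
    	"net"
    	"time"
    )

    func main() {
    	// Connect to etcd's client endpoint and close without a TLS
    	// handshake; etcd logs "rejected connection ... EOF" in response.
    	conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
    	if err != nil {
    		log.Fatal(err)
    	}
    	conn.Close()
    }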
	
	
	==> kernel <==
	 09:38:16 up  1:20,  0 user,  load average: 4.30, 4.16, 3.25
	Linux default-k8s-diff-port-666079 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bcf200eeb50f5e2d26ad7b92d990c6b3d8d58108b4336e8005c6dfaaaa9cbc6b] <==
	I1025 09:37:23.925487       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:37:23.925749       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 09:37:23.925858       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:37:23.925869       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:37:23.925882       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:37:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:37:24.154652       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:37:24.154675       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:37:24.154685       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:37:24.154971       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 09:37:54.154408       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:37:54.155358       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:37:54.155361       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:37:54.155475       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1025 09:37:55.455026       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:37:55.455135       1 metrics.go:72] Registering metrics
	I1025 09:37:55.455231       1 controller.go:711] "Syncing nftables rules"
	I1025 09:38:04.154104       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:38:04.154169       1 main.go:301] handling current node
	I1025 09:38:14.162929       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:38:14.162961       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c26bf38fd7e4b9f51947f954e4ee102888ffd02a198adb203972580c4eb3c74d] <==
	I1025 09:37:21.802590       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:37:21.813928       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:37:21.862647       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 09:37:21.878555       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:37:21.888307       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:37:21.888332       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:37:21.888433       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 09:37:21.888664       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:37:21.896241       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:37:21.897327       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:37:21.897343       1 policy_source.go:240] refreshing policies
	I1025 09:37:21.910055       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 09:37:21.942220       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:37:22.009302       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1025 09:37:22.017937       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:37:22.498687       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:37:23.803475       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:37:24.177737       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:37:24.472468       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:37:24.531400       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:37:24.725916       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.113.205"}
	I1025 09:37:24.778838       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.27.36"}
	I1025 09:37:26.099157       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:37:26.479049       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:37:26.611444       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [fe95bac5f1e76131716e125587dd727d7db7bdabeed57b1078cc75158bc0da09] <==
	I1025 09:37:26.059985       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:37:26.069082       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:37:26.071532       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:37:26.071806       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:37:26.071856       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:37:26.077676       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:37:26.077860       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 09:37:26.077938       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 09:37:26.078009       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 09:37:26.078045       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 09:37:26.078085       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:37:26.078804       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:37:26.081585       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:37:26.085084       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:37:26.089316       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:37:26.091936       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:37:26.092052       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:37:26.094060       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:37:26.098506       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:37:26.107279       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:37:26.108654       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:37:26.131557       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:37:26.139884       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:37:26.139970       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:37:26.140001       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [d2e32fa53d02a52e39f9a3c61406c3eba615d2628a65822c3a98cee9707208b7] <==
	I1025 09:37:25.267273       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:37:25.611525       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:37:25.915484       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:37:25.915524       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 09:37:25.915592       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:37:26.138276       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:37:26.138401       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:37:26.200382       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:37:26.200767       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:37:26.200829       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:37:26.201937       1 config.go:200] "Starting service config controller"
	I1025 09:37:26.201964       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:37:26.204167       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:37:26.204209       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:37:26.204268       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:37:26.204294       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:37:26.204963       1 config.go:309] "Starting node config controller"
	I1025 09:37:26.207445       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:37:26.207514       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:37:26.302391       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:37:26.304599       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:37:26.304624       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [36dbd5d0fba8fd463698b1cfb95820a97032c9e08ee3218bc4e23d5db821fa62] <==
	I1025 09:37:19.066157       1 serving.go:386] Generated self-signed cert in-memory
	I1025 09:37:25.609649       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:37:25.609737       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:37:25.627595       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 09:37:25.627709       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 09:37:25.627830       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:37:25.627924       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:37:25.627983       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:37:25.628025       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:37:25.629298       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:37:25.629387       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:37:25.740055       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 09:37:25.740180       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:37:25.740927       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:37:26 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:26.735748     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8d8e464a-fad4-4966-91eb-5d8b916d9ed7-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-v6j8w\" (UID: \"8d8e464a-fad4-4966-91eb-5d8b916d9ed7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v6j8w"
	Oct 25 09:37:26 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:26.735771     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4v57\" (UniqueName: \"kubernetes.io/projected/8d8e464a-fad4-4966-91eb-5d8b916d9ed7-kube-api-access-f4v57\") pod \"kubernetes-dashboard-855c9754f9-v6j8w\" (UID: \"8d8e464a-fad4-4966-91eb-5d8b916d9ed7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v6j8w"
	Oct 25 09:37:26 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:26.735793     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/280832ce-76f8-440d-a575-b77df2e00e0a-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-n8cm6\" (UID: \"280832ce-76f8-440d-a575-b77df2e00e0a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8cm6"
	Oct 25 09:37:26 default-k8s-diff-port-666079 kubelet[783]: W1025 09:37:26.970577     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/crio-9e2d2ef41243482de269af45b541cbce6b1e309bc4ccff5479fa2c51a01110e4 WatchSource:0}: Error finding container 9e2d2ef41243482de269af45b541cbce6b1e309bc4ccff5479fa2c51a01110e4: Status 404 returned error can't find the container with id 9e2d2ef41243482de269af45b541cbce6b1e309bc4ccff5479fa2c51a01110e4
	Oct 25 09:37:26 default-k8s-diff-port-666079 kubelet[783]: W1025 09:37:26.997639     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/crio-6d9aa50a0365d84227fe63cb67f5eb2df0a85e2a1477646ed549bfb7a8ac5d5b WatchSource:0}: Error finding container 6d9aa50a0365d84227fe63cb67f5eb2df0a85e2a1477646ed549bfb7a8ac5d5b: Status 404 returned error can't find the container with id 6d9aa50a0365d84227fe63cb67f5eb2df0a85e2a1477646ed549bfb7a8ac5d5b
	Oct 25 09:37:28 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:28.848308     783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 09:37:34 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:34.111634     783 scope.go:117] "RemoveContainer" containerID="8d2938558d3d5a4dccdaebd7a18351c5b20a9c6225f1279ba3423be4339911c9"
	Oct 25 09:37:35 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:35.117008     783 scope.go:117] "RemoveContainer" containerID="8d2938558d3d5a4dccdaebd7a18351c5b20a9c6225f1279ba3423be4339911c9"
	Oct 25 09:37:35 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:35.117298     783 scope.go:117] "RemoveContainer" containerID="5aaa7986e6f9d7b8cf1311e668b322abf9c9f26c1f9f24fefe78e4d2da758fb4"
	Oct 25 09:37:35 default-k8s-diff-port-666079 kubelet[783]: E1025 09:37:35.117440     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8cm6_kubernetes-dashboard(280832ce-76f8-440d-a575-b77df2e00e0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8cm6" podUID="280832ce-76f8-440d-a575-b77df2e00e0a"
	Oct 25 09:37:36 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:36.909039     783 scope.go:117] "RemoveContainer" containerID="5aaa7986e6f9d7b8cf1311e668b322abf9c9f26c1f9f24fefe78e4d2da758fb4"
	Oct 25 09:37:36 default-k8s-diff-port-666079 kubelet[783]: E1025 09:37:36.909233     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8cm6_kubernetes-dashboard(280832ce-76f8-440d-a575-b77df2e00e0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8cm6" podUID="280832ce-76f8-440d-a575-b77df2e00e0a"
	Oct 25 09:37:51 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:51.792413     783 scope.go:117] "RemoveContainer" containerID="5aaa7986e6f9d7b8cf1311e668b322abf9c9f26c1f9f24fefe78e4d2da758fb4"
	Oct 25 09:37:52 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:52.187359     783 scope.go:117] "RemoveContainer" containerID="5aaa7986e6f9d7b8cf1311e668b322abf9c9f26c1f9f24fefe78e4d2da758fb4"
	Oct 25 09:37:52 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:52.187668     783 scope.go:117] "RemoveContainer" containerID="7ba993183832edaaf183af2a7b8cff0dbc3f87072503fc42186cda8f2ee1e23c"
	Oct 25 09:37:52 default-k8s-diff-port-666079 kubelet[783]: E1025 09:37:52.187842     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8cm6_kubernetes-dashboard(280832ce-76f8-440d-a575-b77df2e00e0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8cm6" podUID="280832ce-76f8-440d-a575-b77df2e00e0a"
	Oct 25 09:37:52 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:52.211693     783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v6j8w" podStartSLOduration=13.900403321 podStartE2EDuration="26.211670393s" podCreationTimestamp="2025-10-25 09:37:26 +0000 UTC" firstStartedPulling="2025-10-25 09:37:27.008219206 +0000 UTC m=+13.533618711" lastFinishedPulling="2025-10-25 09:37:39.319486278 +0000 UTC m=+25.844885783" observedRunningTime="2025-10-25 09:37:40.215444037 +0000 UTC m=+26.740843550" watchObservedRunningTime="2025-10-25 09:37:52.211670393 +0000 UTC m=+38.737069906"
	Oct 25 09:37:55 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:55.197109     783 scope.go:117] "RemoveContainer" containerID="6c54fce55676c84d4384dd7ac96ecf2530d5a363686e91690dc3545792bcc0b6"
	Oct 25 09:37:56 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:56.909427     783 scope.go:117] "RemoveContainer" containerID="7ba993183832edaaf183af2a7b8cff0dbc3f87072503fc42186cda8f2ee1e23c"
	Oct 25 09:37:56 default-k8s-diff-port-666079 kubelet[783]: E1025 09:37:56.910126     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8cm6_kubernetes-dashboard(280832ce-76f8-440d-a575-b77df2e00e0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8cm6" podUID="280832ce-76f8-440d-a575-b77df2e00e0a"
	Oct 25 09:38:07 default-k8s-diff-port-666079 kubelet[783]: I1025 09:38:07.792344     783 scope.go:117] "RemoveContainer" containerID="7ba993183832edaaf183af2a7b8cff0dbc3f87072503fc42186cda8f2ee1e23c"
	Oct 25 09:38:07 default-k8s-diff-port-666079 kubelet[783]: E1025 09:38:07.793398     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8cm6_kubernetes-dashboard(280832ce-76f8-440d-a575-b77df2e00e0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8cm6" podUID="280832ce-76f8-440d-a575-b77df2e00e0a"
	Oct 25 09:38:12 default-k8s-diff-port-666079 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:38:12 default-k8s-diff-port-666079 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:38:12 default-k8s-diff-port-666079 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [8fbb9eadefbd80b899692ec9dd8c86fba760ca25136cdb11e58fcf1c5b382d3f] <==
	2025/10/25 09:37:39 Starting overwatch
	2025/10/25 09:37:39 Using namespace: kubernetes-dashboard
	2025/10/25 09:37:39 Using in-cluster config to connect to apiserver
	2025/10/25 09:37:39 Using secret token for csrf signing
	2025/10/25 09:37:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:37:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:37:39 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:37:39 Generating JWE encryption key
	2025/10/25 09:37:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:37:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:37:39 Initializing JWE encryption key from synchronized object
	2025/10/25 09:37:39 Creating in-cluster Sidecar client
	2025/10/25 09:37:39 Serving insecurely on HTTP port: 9090
	2025/10/25 09:37:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:38:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6c54fce55676c84d4384dd7ac96ecf2530d5a363686e91690dc3545792bcc0b6] <==
	I1025 09:37:25.079186       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:37:55.105182       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fcd374f1688b8ed4e7571ec994e121446bd298b6684eee33d3b3ab788c09fd2a] <==
	I1025 09:37:55.247761       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:37:55.259852       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:37:55.259983       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:37:55.262304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:58.717522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:02.978430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:06.577099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:09.630876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:12.655245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:12.664491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:38:12.664731       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:38:12.667371       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-666079_ac0899b9-7a72-4472-90fc-3a6456555790!
	I1025 09:38:12.669098       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a28afa3c-22cb-41cd-9bf1-a7e2b455d9f3", APIVersion:"v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-666079_ac0899b9-7a72-4472-90fc-3a6456555790 became leader
	W1025 09:38:12.675943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:12.687741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:38:12.767827       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-666079_ac0899b9-7a72-4472-90fc-3a6456555790!
	W1025 09:38:14.691374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:14.700968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-666079 -n default-k8s-diff-port-666079
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-666079 -n default-k8s-diff-port-666079: exit status 2 (397.378669ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-666079 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-666079
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-666079:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862",
	        "Created": "2025-10-25T09:35:22.279167682Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 216468,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:37:05.932876673Z",
	            "FinishedAt": "2025-10-25T09:37:04.909610866Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/hostname",
	        "HostsPath": "/var/lib/docker/containers/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/hosts",
	        "LogPath": "/var/lib/docker/containers/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862-json.log",
	        "Name": "/default-k8s-diff-port-666079",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-666079:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-666079",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862",
	                "LowerDir": "/var/lib/docker/overlay2/5a2d5db135a98df69094a4c9e2b07f83f6518ae35a00f1aa521570af97c1888e-init/diff:/var/lib/docker/overlay2/cef79fc16ca3e257f9c9390d34fae091a7f7fa219913b2606caf1922ea50ed93/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a2d5db135a98df69094a4c9e2b07f83f6518ae35a00f1aa521570af97c1888e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a2d5db135a98df69094a4c9e2b07f83f6518ae35a00f1aa521570af97c1888e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a2d5db135a98df69094a4c9e2b07f83f6518ae35a00f1aa521570af97c1888e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-666079",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-666079/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-666079",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-666079",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-666079",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c5ec8df35b5632d691aa0030a2690f1f0c45149472d5069b8fc7096388cff0f6",
	            "SandboxKey": "/var/run/docker/netns/c5ec8df35b56",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-666079": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:8e:44:79:f7:6e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fca20c11b6d784ec6e97d5309475016004c54db4ff0e1ebce1147f0efda81f09",
	                    "EndpointID": "6098d684082efe9a456343f860d07b14b985e08abf67e62215a74dcd6b756080",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-666079",
	                        "957d2a4135a8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-666079 -n default-k8s-diff-port-666079
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-666079 -n default-k8s-diff-port-666079: exit status 2 (342.817379ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-666079 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-666079 logs -n 25: (1.315324316s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p no-preload-179869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ delete  │ -p no-preload-179869                                                                                                                                                                                                                          │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p no-preload-179869                                                                                                                                                                                                                          │ no-preload-179869            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p disable-driver-mounts-901717                                                                                                                                                                                                               │ disable-driver-mounts-901717 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-666079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:36 UTC │
	│ image   │ embed-certs-173264 image list --format=json                                                                                                                                                                                                   │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ pause   │ -p embed-certs-173264 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ delete  │ -p embed-certs-173264                                                                                                                                                                                                                         │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ delete  │ -p embed-certs-173264                                                                                                                                                                                                                         │ embed-certs-173264           │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ start   │ -p newest-cni-052144 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-052144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ stop    │ -p newest-cni-052144 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-052144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ start   │ -p newest-cni-052144 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ image   │ newest-cni-052144 image list --format=json                                                                                                                                                                                                    │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ pause   │ -p newest-cni-052144 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-666079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-666079 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:37 UTC │
	│ delete  │ -p newest-cni-052144                                                                                                                                                                                                                          │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ delete  │ -p newest-cni-052144                                                                                                                                                                                                                          │ newest-cni-052144            │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ start   │ -p auto-068349 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-068349                  │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-666079 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:37 UTC │ 25 Oct 25 09:37 UTC │
	│ start   │ -p default-k8s-diff-port-666079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:37 UTC │ 25 Oct 25 09:38 UTC │
	│ image   │ default-k8s-diff-port-666079 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:38 UTC │ 25 Oct 25 09:38 UTC │
	│ pause   │ -p default-k8s-diff-port-666079 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-666079 │ jenkins │ v1.37.0 │ 25 Oct 25 09:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:37:05
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:37:05.556347  216293 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:37:05.556571  216293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:37:05.556599  216293 out.go:374] Setting ErrFile to fd 2...
	I1025 09:37:05.556619  216293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:37:05.556917  216293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:37:05.557404  216293 out.go:368] Setting JSON to false
	I1025 09:37:05.558398  216293 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4777,"bootTime":1761380249,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:37:05.558493  216293 start.go:141] virtualization:  
	I1025 09:37:05.562025  216293 out.go:179] * [default-k8s-diff-port-666079] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:37:05.564947  216293 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:37:05.565046  216293 notify.go:220] Checking for updates...
	I1025 09:37:05.570836  216293 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:37:05.573862  216293 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:37:05.576884  216293 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:37:05.580271  216293 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:37:05.583177  216293 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:37:05.586655  216293 config.go:182] Loaded profile config "default-k8s-diff-port-666079": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:37:05.587333  216293 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:37:05.631866  216293 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:37:05.631983  216293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:37:05.735249  216293 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 09:37:05.725368962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:37:05.735356  216293 docker.go:318] overlay module found
	I1025 09:37:05.738628  216293 out.go:179] * Using the docker driver based on existing profile
	I1025 09:37:05.741466  216293 start.go:305] selected driver: docker
	I1025 09:37:05.741486  216293 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-666079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:37:05.741593  216293 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:37:05.742559  216293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:37:05.834799  216293 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 09:37:05.81995224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:37:05.835128  216293 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:37:05.835160  216293 cni.go:84] Creating CNI manager for ""
	I1025 09:37:05.835217  216293 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:37:05.835255  216293 start.go:349] cluster config:
	{Name:default-k8s-diff-port-666079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:37:05.838712  216293 out.go:179] * Starting "default-k8s-diff-port-666079" primary control-plane node in "default-k8s-diff-port-666079" cluster
	I1025 09:37:05.841667  216293 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:37:05.844747  216293 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:37:05.847618  216293 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:37:05.847676  216293 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:37:05.847688  216293 cache.go:58] Caching tarball of preloaded images
	I1025 09:37:05.847774  216293 preload.go:233] Found /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:37:05.847788  216293 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:37:05.847909  216293 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/config.json ...
	I1025 09:37:05.848137  216293 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:37:05.874875  216293 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:37:05.874895  216293 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:37:05.874908  216293 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:37:05.874930  216293 start.go:360] acquireMachinesLock for default-k8s-diff-port-666079: {Name:mk25f9f0a43388f7cdd9c3ecfcc6756ef82b00a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:37:05.874993  216293 start.go:364] duration metric: took 35.808µs to acquireMachinesLock for "default-k8s-diff-port-666079"
	I1025 09:37:05.875019  216293 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:37:05.875026  216293 fix.go:54] fixHost starting: 
	I1025 09:37:05.875295  216293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:37:05.897812  216293 fix.go:112] recreateIfNeeded on default-k8s-diff-port-666079: state=Stopped err=<nil>
	W1025 09:37:05.897842  216293 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 09:37:01.998401  214786 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-068349:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.436037396s)
	I1025 09:37:01.998435  214786 kic.go:203] duration metric: took 4.436168465s to extract preloaded images to volume ...
	W1025 09:37:01.998576  214786 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 09:37:01.998689  214786 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:37:02.055141  214786 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-068349 --name auto-068349 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-068349 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-068349 --network auto-068349 --ip 192.168.85.2 --volume auto-068349:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:37:02.366015  214786 cli_runner.go:164] Run: docker container inspect auto-068349 --format={{.State.Running}}
	I1025 09:37:02.386935  214786 cli_runner.go:164] Run: docker container inspect auto-068349 --format={{.State.Status}}
	I1025 09:37:02.406857  214786 cli_runner.go:164] Run: docker exec auto-068349 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:37:02.459584  214786 oci.go:144] the created container "auto-068349" has a running status.
	I1025 09:37:02.459620  214786 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa...
	I1025 09:37:03.270702  214786 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:37:03.289898  214786 cli_runner.go:164] Run: docker container inspect auto-068349 --format={{.State.Status}}
	I1025 09:37:03.308090  214786 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:37:03.308108  214786 kic_runner.go:114] Args: [docker exec --privileged auto-068349 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:37:03.351199  214786 cli_runner.go:164] Run: docker container inspect auto-068349 --format={{.State.Status}}
	I1025 09:37:03.368959  214786 machine.go:93] provisionDockerMachine start ...
	I1025 09:37:03.369059  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:03.387604  214786 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:03.387933  214786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1025 09:37:03.387948  214786 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:37:03.550506  214786 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-068349
	
	I1025 09:37:03.550531  214786 ubuntu.go:182] provisioning hostname "auto-068349"
	I1025 09:37:03.550643  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:03.572248  214786 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:03.572554  214786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1025 09:37:03.572579  214786 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-068349 && echo "auto-068349" | sudo tee /etc/hostname
	I1025 09:37:03.731544  214786 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-068349
	
	I1025 09:37:03.731680  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:03.748737  214786 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:03.749077  214786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1025 09:37:03.749102  214786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-068349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-068349/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-068349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:37:03.898133  214786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:37:03.898158  214786 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:37:03.898178  214786 ubuntu.go:190] setting up certificates
	I1025 09:37:03.898187  214786 provision.go:84] configureAuth start
	I1025 09:37:03.898246  214786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-068349
	I1025 09:37:03.917918  214786 provision.go:143] copyHostCerts
	I1025 09:37:03.918135  214786 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:37:03.918152  214786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:37:03.918230  214786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:37:03.918357  214786 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:37:03.918369  214786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:37:03.918400  214786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:37:03.918457  214786 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:37:03.918467  214786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:37:03.918493  214786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 09:37:03.918543  214786 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.auto-068349 san=[127.0.0.1 192.168.85.2 auto-068349 localhost minikube]
	I1025 09:37:04.337492  214786 provision.go:177] copyRemoteCerts
	I1025 09:37:04.337563  214786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:37:04.337610  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:04.354350  214786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa Username:docker}
	I1025 09:37:04.457776  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:37:04.474991  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1025 09:37:04.492594  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:37:04.509630  214786 provision.go:87] duration metric: took 611.419706ms to configureAuth
	I1025 09:37:04.509656  214786 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:37:04.509844  214786 config.go:182] Loaded profile config "auto-068349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:37:04.509951  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:04.527260  214786 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:04.527568  214786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1025 09:37:04.527588  214786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:37:04.789197  214786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:37:04.789220  214786 machine.go:96] duration metric: took 1.420239246s to provisionDockerMachine
	I1025 09:37:04.789231  214786 client.go:171] duration metric: took 7.864205677s to LocalClient.Create
	I1025 09:37:04.789249  214786 start.go:167] duration metric: took 7.864276652s to libmachine.API.Create "auto-068349"
	I1025 09:37:04.789256  214786 start.go:293] postStartSetup for "auto-068349" (driver="docker")
	I1025 09:37:04.789266  214786 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:37:04.789328  214786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:37:04.789378  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:04.807853  214786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa Username:docker}
	I1025 09:37:04.918293  214786 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:37:04.924273  214786 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:37:04.924356  214786 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:37:04.924382  214786 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:37:04.924467  214786 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:37:04.924587  214786 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:37:04.924737  214786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:37:04.935394  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:37:04.959489  214786 start.go:296] duration metric: took 170.219238ms for postStartSetup
	I1025 09:37:04.959841  214786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-068349
	I1025 09:37:04.983380  214786 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/config.json ...
	I1025 09:37:04.983790  214786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:37:04.983879  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:05.006808  214786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa Username:docker}
	I1025 09:37:05.119036  214786 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:37:05.125621  214786 start.go:128] duration metric: took 8.204166061s to createHost
	I1025 09:37:05.125645  214786 start.go:83] releasing machines lock for "auto-068349", held for 8.204297623s
	I1025 09:37:05.125717  214786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-068349
	I1025 09:37:05.149090  214786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:37:05.149090  214786 ssh_runner.go:195] Run: cat /version.json
	I1025 09:37:05.149169  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:05.149191  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:05.181465  214786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa Username:docker}
	I1025 09:37:05.186129  214786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa Username:docker}
	I1025 09:37:05.322224  214786 ssh_runner.go:195] Run: systemctl --version
	I1025 09:37:05.424747  214786 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:37:05.477612  214786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:37:05.482502  214786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:37:05.482569  214786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:37:05.524275  214786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 09:37:05.524299  214786 start.go:495] detecting cgroup driver to use...
	I1025 09:37:05.524329  214786 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:37:05.524379  214786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:37:05.546799  214786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:37:05.560575  214786 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:37:05.560634  214786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:37:05.582509  214786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:37:05.602101  214786 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:37:05.780302  214786 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:37:05.962364  214786 docker.go:234] disabling docker service ...
	I1025 09:37:05.962428  214786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:37:05.991497  214786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:37:06.029024  214786 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:37:06.186264  214786 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:37:06.375698  214786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:37:06.389558  214786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:37:06.404437  214786 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:37:06.404504  214786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:06.413271  214786 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:37:06.413338  214786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:06.422236  214786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:06.431640  214786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:06.440541  214786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:37:06.449169  214786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:06.458477  214786 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:06.471835  214786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:06.480561  214786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:37:06.493559  214786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:37:06.507213  214786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:06.676781  214786 ssh_runner.go:195] Run: sudo systemctl restart crio
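Editor's note: the sed calls above edit CRI-O's drop-in in place, and the log never prints the resulting file. As a sketch only (the section layout below is assumed, reconstructed from the sed patterns), the net effect on /etc/crio/crio.conf.d/02-crio.conf should be roughly:

	# assumed net effect of the sed edits above; 02-crio.conf is not shown in the log
	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart crio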
	I1025 09:37:06.878678  214786 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:37:06.878748  214786 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:37:06.882913  214786 start.go:563] Will wait 60s for crictl version
	I1025 09:37:06.882986  214786 ssh_runner.go:195] Run: which crictl
	I1025 09:37:06.886363  214786 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:37:06.916783  214786 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:37:06.916871  214786 ssh_runner.go:195] Run: crio --version
	I1025 09:37:06.957202  214786 ssh_runner.go:195] Run: crio --version
	I1025 09:37:06.998836  214786 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:37:07.002243  214786 cli_runner.go:164] Run: docker network inspect auto-068349 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:37:07.028098  214786 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 09:37:07.032483  214786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
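Editor's note: the one-liner above rewrites /etc/hosts without sudo-redirection problems: it filters out any stale mapping (matching a literal tab before the name), appends the fresh one, and copies the temp file back into place. Expanded for readability, assuming the same gateway address:

	# drop any stale mapping, append the current one, then install atomically
	ip=192.168.85.1
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '%s\thost.minikube.internal\n' "$ip"; } > "/tmp/hosts.$$"
	sudo cp "/tmp/hosts.$$" /etc/hosts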
	I1025 09:37:07.042887  214786 kubeadm.go:883] updating cluster {Name:auto-068349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-068349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:37:07.042998  214786 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:37:07.043061  214786 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:37:07.079203  214786 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:37:07.079230  214786 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:37:07.079286  214786 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:37:07.105079  214786 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:37:07.105105  214786 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:37:07.105112  214786 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 09:37:07.105195  214786 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-068349 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-068349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
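Editor's note: the [Unit]/[Service] fragment above becomes the kubelet systemd drop-in (scp'd below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service). A quick way to confirm systemd actually merged it:

	# verify the drop-in is in place and folded into the unit definition
	sudo systemctl daemon-reload
	systemctl cat kubelet            # prints kubelet.service plus 10-kubeadm.conf
	systemctl show -p ExecStart kubelet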
	I1025 09:37:07.105274  214786 ssh_runner.go:195] Run: crio config
	I1025 09:37:07.165651  214786 cni.go:84] Creating CNI manager for ""
	I1025 09:37:07.165673  214786 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:37:07.165696  214786 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:37:07.165719  214786 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-068349 NodeName:auto-068349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:37:07.165847  214786 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-068349"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:37:07.165921  214786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:37:07.175359  214786 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:37:07.175422  214786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:37:07.182794  214786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1025 09:37:07.195797  214786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:37:07.208297  214786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
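Editor's note: the rendered kubeadm config now sits on the node as /var/tmp/minikube/kubeadm.yaml.new (it is copied to kubeadm.yaml just before init, further down). It can be sanity-checked offline first; a sketch, assuming a kubeadm version that ships the validate subcommand (added around v1.26):

	# check each document in the file against its schema before running init
	/var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new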
	I1025 09:37:07.221005  214786 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:37:07.224415  214786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:37:07.234163  214786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:07.346657  214786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:37:07.363018  214786 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349 for IP: 192.168.85.2
	I1025 09:37:07.363043  214786 certs.go:195] generating shared ca certs ...
	I1025 09:37:07.363059  214786 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:07.363244  214786 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:37:07.363307  214786 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:37:07.363323  214786 certs.go:257] generating profile certs ...
	I1025 09:37:07.363394  214786 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.key
	I1025 09:37:07.363412  214786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt with IP's: []
	I1025 09:37:08.118354  214786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt ...
	I1025 09:37:08.118386  214786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt: {Name:mkf3844481f6d137a348604bf759496511bf005d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:08.118576  214786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.key ...
	I1025 09:37:08.118590  214786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.key: {Name:mk1abe8e96987e07432d85c390d8053756e56039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:08.118676  214786 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.key.a6755b95
	I1025 09:37:08.118695  214786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.crt.a6755b95 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1025 09:37:08.389428  214786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.crt.a6755b95 ...
	I1025 09:37:08.389459  214786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.crt.a6755b95: {Name:mk639d44540e200e77566e7443b2a7036ce15252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:08.389669  214786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.key.a6755b95 ...
	I1025 09:37:08.389687  214786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.key.a6755b95: {Name:mk82258b28cbf222f1bea0faa41cf6fb2fd0b04d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:08.389778  214786 certs.go:382] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.crt.a6755b95 -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.crt
	I1025 09:37:08.389862  214786 certs.go:386] copying /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.key.a6755b95 -> /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.key
	I1025 09:37:08.389925  214786 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/proxy-client.key
	I1025 09:37:08.389943  214786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/proxy-client.crt with IP's: []
	I1025 09:37:09.226693  214786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/proxy-client.crt ...
	I1025 09:37:09.226725  214786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/proxy-client.crt: {Name:mk5eb8d32a06c15e8d3db32c80e10bda280eeef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:09.226912  214786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/proxy-client.key ...
	I1025 09:37:09.226925  214786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/proxy-client.key: {Name:mk08771d4ef8be7e5e66fe2543d759828932c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:09.227121  214786 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:37:09.227163  214786 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:37:09.227176  214786 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:37:09.227200  214786 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:37:09.227230  214786 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:37:09.227253  214786 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:37:09.227300  214786 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:37:09.227895  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:37:09.246105  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:37:09.264798  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:37:09.284256  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:37:09.302186  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1025 09:37:09.328562  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:37:09.350260  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:37:09.372588  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:37:09.394670  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:37:09.416597  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:37:09.438597  214786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:37:09.457637  214786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:37:09.474322  214786 ssh_runner.go:195] Run: openssl version
	I1025 09:37:09.482773  214786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:37:09.493261  214786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:09.497595  214786 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:09.497658  214786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:09.545330  214786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:37:09.554743  214786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:37:09.564417  214786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:37:09.569037  214786 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:37:09.569179  214786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:37:09.611550  214786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
	I1025 09:37:09.619733  214786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:37:09.627830  214786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:37:09.631508  214786 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:37:09.631588  214786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:37:09.688831  214786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
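Editor's note: each openssl x509 -hash call above computes the subject-name hash OpenSSL uses to look up CAs, and the matching ln -fs creates the <hash>.0 link (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run). The pattern, as a standalone sketch:

	# link a CA into the OpenSSL lookup directory under its subject hash
	cert=/usr/share/ca-certificates/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${h}.0"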
	I1025 09:37:09.709544  214786 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:37:09.714354  214786 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:37:09.714404  214786 kubeadm.go:400] StartCluster: {Name:auto-068349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-068349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:37:09.714478  214786 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:37:09.714541  214786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:37:09.756168  214786 cri.go:89] found id: ""
	I1025 09:37:09.756241  214786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:37:09.773781  214786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:37:09.783182  214786 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:37:09.783243  214786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:37:09.792606  214786 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:37:09.792623  214786 kubeadm.go:157] found existing configuration files:
	
	I1025 09:37:09.792674  214786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:37:09.805260  214786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:37:09.805319  214786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:37:09.812827  214786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:37:09.824611  214786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:37:09.824735  214786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:37:09.831870  214786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:37:09.840201  214786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:37:09.840265  214786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:37:09.847533  214786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:37:09.855010  214786 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:37:09.855074  214786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
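Editor's note: the four grep-then-rm exchanges above implement the stale-kubeconfig sweep: any /etc/kubernetes/*.conf that does not point at the expected control-plane endpoint is deleted before kubeadm init runs. Condensed into a loop:

	# remove kubeconfigs that don't reference the expected control-plane endpoint
	ep='https://control-plane.minikube.internal:8443'
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$ep" "/etc/kubernetes/$f.conf" 2>/dev/null \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done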
	I1025 09:37:09.862251  214786 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:37:09.916967  214786 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:37:09.917322  214786 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:37:09.954712  214786 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:37:09.954993  214786 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 09:37:09.955039  214786 kubeadm.go:318] OS: Linux
	I1025 09:37:09.955092  214786 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:37:09.955146  214786 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 09:37:09.955199  214786 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:37:09.955252  214786 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:37:09.955306  214786 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:37:09.955374  214786 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:37:09.955425  214786 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:37:09.955478  214786 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:37:09.955530  214786 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 09:37:10.039898  214786 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:37:10.040020  214786 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:37:10.040120  214786 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:37:10.054523  214786 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:37:05.901113  216293 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-666079" ...
	I1025 09:37:05.901193  216293 cli_runner.go:164] Run: docker start default-k8s-diff-port-666079
	I1025 09:37:06.220400  216293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:37:06.238958  216293 kic.go:430] container "default-k8s-diff-port-666079" state is running.
	I1025 09:37:06.239351  216293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-666079
	I1025 09:37:06.268847  216293 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/config.json ...
	I1025 09:37:06.269082  216293 machine.go:93] provisionDockerMachine start ...
	I1025 09:37:06.269143  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:06.298473  216293 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:06.298786  216293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1025 09:37:06.298795  216293 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:37:06.301314  216293 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 09:37:09.462103  216293 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-666079
	
	I1025 09:37:09.462183  216293 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-666079"
	I1025 09:37:09.462321  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:09.487256  216293 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:09.487565  216293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1025 09:37:09.487577  216293 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-666079 && echo "default-k8s-diff-port-666079" | sudo tee /etc/hostname
	I1025 09:37:09.660089  216293 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-666079
	
	I1025 09:37:09.660196  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:09.690703  216293 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:09.691010  216293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1025 09:37:09.691034  216293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-666079' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-666079/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-666079' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:37:09.870562  216293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:37:09.870591  216293 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21796-2312/.minikube CaCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21796-2312/.minikube}
	I1025 09:37:09.870621  216293 ubuntu.go:190] setting up certificates
	I1025 09:37:09.870631  216293 provision.go:84] configureAuth start
	I1025 09:37:09.870691  216293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-666079
	I1025 09:37:09.895823  216293 provision.go:143] copyHostCerts
	I1025 09:37:09.895892  216293 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem, removing ...
	I1025 09:37:09.895912  216293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem
	I1025 09:37:09.895995  216293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/ca.pem (1082 bytes)
	I1025 09:37:09.896102  216293 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem, removing ...
	I1025 09:37:09.896111  216293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem
	I1025 09:37:09.896137  216293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/cert.pem (1123 bytes)
	I1025 09:37:09.896195  216293 exec_runner.go:144] found /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem, removing ...
	I1025 09:37:09.896203  216293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem
	I1025 09:37:09.896228  216293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21796-2312/.minikube/key.pem (1679 bytes)
	I1025 09:37:09.896279  216293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-666079 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-666079 localhost minikube]
	I1025 09:37:10.478822  216293 provision.go:177] copyRemoteCerts
	I1025 09:37:10.478942  216293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:37:10.479048  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:10.497083  216293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:37:10.061200  214786 out.go:252]   - Generating certificates and keys ...
	I1025 09:37:10.061293  214786 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:37:10.061374  214786 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:37:10.986145  214786 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:37:11.195317  214786 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:37:10.610348  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:37:10.631714  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 09:37:10.650875  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:37:10.669798  216293 provision.go:87] duration metric: took 799.137725ms to configureAuth
	I1025 09:37:10.669871  216293 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:37:10.670130  216293 config.go:182] Loaded profile config "default-k8s-diff-port-666079": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:37:10.670279  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:10.688741  216293 main.go:141] libmachine: Using SSH client type: native
	I1025 09:37:10.689057  216293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1025 09:37:10.689072  216293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:37:11.023677  216293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:37:11.023766  216293 machine.go:96] duration metric: took 4.754675305s to provisionDockerMachine
	I1025 09:37:11.023791  216293 start.go:293] postStartSetup for "default-k8s-diff-port-666079" (driver="docker")
	I1025 09:37:11.023817  216293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:37:11.023947  216293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:37:11.024030  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:11.043579  216293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:37:11.159762  216293 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:37:11.164153  216293 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:37:11.164226  216293 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:37:11.164254  216293 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/addons for local assets ...
	I1025 09:37:11.164346  216293 filesync.go:126] Scanning /home/jenkins/minikube-integration/21796-2312/.minikube/files for local assets ...
	I1025 09:37:11.164472  216293 filesync.go:149] local asset: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem -> 41102.pem in /etc/ssl/certs
	I1025 09:37:11.164637  216293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 09:37:11.173342  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:37:11.194236  216293 start.go:296] duration metric: took 170.416469ms for postStartSetup
	I1025 09:37:11.194359  216293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:37:11.194429  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:11.219034  216293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:37:11.328574  216293 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:37:11.334368  216293 fix.go:56] duration metric: took 5.459334092s for fixHost
	I1025 09:37:11.334402  216293 start.go:83] releasing machines lock for "default-k8s-diff-port-666079", held for 5.45939335s
	I1025 09:37:11.334482  216293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-666079
	I1025 09:37:11.361690  216293 ssh_runner.go:195] Run: cat /version.json
	I1025 09:37:11.361761  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:11.362032  216293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:37:11.362134  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:11.403456  216293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:37:11.405604  216293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:37:11.604502  216293 ssh_runner.go:195] Run: systemctl --version
	I1025 09:37:11.613198  216293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:37:11.666871  216293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:37:11.673336  216293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:37:11.673440  216293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:37:11.684646  216293 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:37:11.684691  216293 start.go:495] detecting cgroup driver to use...
	I1025 09:37:11.684754  216293 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:37:11.684817  216293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:37:11.704647  216293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:37:11.723195  216293 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:37:11.723286  216293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:37:11.744196  216293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:37:11.761759  216293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:37:11.910596  216293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:37:12.078286  216293 docker.go:234] disabling docker service ...
	I1025 09:37:12.078366  216293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:37:12.099815  216293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:37:12.114772  216293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:37:12.277359  216293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:37:12.433182  216293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:37:12.451291  216293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:37:12.467778  216293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:37:12.467870  216293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:12.477619  216293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:37:12.477696  216293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:12.488289  216293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:12.498792  216293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:12.510131  216293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:37:12.520653  216293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:12.531799  216293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:12.542026  216293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:37:12.552254  216293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:37:12.560594  216293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:37:12.569486  216293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:12.752285  216293 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:37:12.898781  216293 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:37:12.898874  216293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:37:12.903539  216293 start.go:563] Will wait 60s for crictl version
	I1025 09:37:12.903638  216293 ssh_runner.go:195] Run: which crictl
	I1025 09:37:12.907869  216293 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:37:12.938916  216293 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
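Editor's note: crictl resolves its endpoint from the /etc/crictl.yaml written a few lines earlier; the same query can also name the CRI socket explicitly on the command line, which is handy when the YAML is suspect:

	# bypass /etc/crictl.yaml and point at the CRI-O socket directly
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version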
	I1025 09:37:12.939036  216293 ssh_runner.go:195] Run: crio --version
	I1025 09:37:12.969908  216293 ssh_runner.go:195] Run: crio --version
	I1025 09:37:13.009399  216293 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:37:13.012380  216293 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-666079 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:37:13.031651  216293 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 09:37:13.036092  216293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:37:13.045784  216293 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-666079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:37:13.045897  216293 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:37:13.045962  216293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:37:13.082477  216293 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:37:13.082505  216293 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:37:13.082559  216293 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:37:13.120320  216293 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:37:13.120346  216293 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:37:13.120355  216293 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1025 09:37:13.120447  216293 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-666079 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
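The drop-in above follows the standard systemd convention: the bare ExecStart= first clears the ExecStart inherited from the base kubelet.service, and the second line redefines it with node-specific flags. A sketch of rendering such a drop-in with text/template, using values from this run (the template shape is illustrative, not minikube's actual template):

// kubelet_dropin.go — render a kubelet systemd drop-in like the one logged
// above. The empty "ExecStart=" line is required by systemd before an
// ExecStart can be redefined in a drop-in.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.K8sVersion}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from this run's log.
	err := t.Execute(os.Stdout, map[string]string{
		"Runtime":    "crio",
		"K8sVersion": "v1.34.1",
		"Node":       "default-k8s-diff-port-666079",
		"IP":         "192.168.76.2",
	})
	if err != nil {
		panic(err)
	}
}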
	I1025 09:37:13.120520  216293 ssh_runner.go:195] Run: crio config
	I1025 09:37:13.232148  216293 cni.go:84] Creating CNI manager for ""
	I1025 09:37:13.232176  216293 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:37:13.232207  216293 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:37:13.232240  216293 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-666079 NodeName:default-k8s-diff-port-666079 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:37:13.232412  216293 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-666079"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:37:13.232513  216293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:37:13.241266  216293 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:37:13.241351  216293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:37:13.251310  216293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1025 09:37:13.264094  216293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:37:13.276611  216293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
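The 2225-byte kubeadm.yaml.new staged above is the multi-document config printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration). A small stdlib-only sketch that splits such a file and lists each document's kind, handy when checking what a generated config actually contains (requires Go 1.20+ for strings.CutPrefix):

// kubeadm_docs.go — split a multi-document kubeadm config on "---" and
// report each document's apiVersion and kind.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		panic(err)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind, api := "?", "?"
		for _, line := range strings.Split(doc, "\n") {
			if v, ok := strings.CutPrefix(line, "kind: "); ok {
				kind = v
			}
			if v, ok := strings.CutPrefix(line, "apiVersion: "); ok {
				api = v
			}
		}
		fmt.Printf("doc %d: %s (%s)\n", i+1, kind, api)
	}
}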
	I1025 09:37:13.289240  216293 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:37:13.292811  216293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:37:13.301999  216293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:13.448898  216293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:37:13.465098  216293 certs.go:69] Setting up /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079 for IP: 192.168.76.2
	I1025 09:37:13.465175  216293 certs.go:195] generating shared ca certs ...
	I1025 09:37:13.465208  216293 certs.go:227] acquiring lock for ca certs: {Name:mke81a91ad41248e67f7acba9fb2e71e1c110e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:13.465395  216293 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key
	I1025 09:37:13.465462  216293 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key
	I1025 09:37:13.465485  216293 certs.go:257] generating profile certs ...
	I1025 09:37:13.465617  216293 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.key
	I1025 09:37:13.465722  216293 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.key.f342de6b
	I1025 09:37:13.465786  216293 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.key
	I1025 09:37:13.465937  216293 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem (1338 bytes)
	W1025 09:37:13.466015  216293 certs.go:480] ignoring /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110_empty.pem, impossibly tiny 0 bytes
	I1025 09:37:13.466045  216293 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:37:13.466091  216293 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:37:13.466147  216293 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:37:13.466193  216293 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/certs/key.pem (1679 bytes)
	I1025 09:37:13.466276  216293 certs.go:484] found cert: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem (1708 bytes)
	I1025 09:37:13.466914  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:37:13.527190  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 09:37:13.569557  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:37:13.588582  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:37:13.607646  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 09:37:13.626657  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:37:13.647731  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:37:13.672826  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:37:13.698140  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:37:13.797649  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/certs/4110.pem --> /usr/share/ca-certificates/4110.pem (1338 bytes)
	I1025 09:37:13.844132  216293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/ssl/certs/41102.pem --> /usr/share/ca-certificates/41102.pem (1708 bytes)
	I1025 09:37:13.864651  216293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:37:13.878204  216293 ssh_runner.go:195] Run: openssl version
	I1025 09:37:13.885205  216293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:37:13.893696  216293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:13.898094  216293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:13.898161  216293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:37:13.939701  216293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:37:13.947652  216293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4110.pem && ln -fs /usr/share/ca-certificates/4110.pem /etc/ssl/certs/4110.pem"
	I1025 09:37:13.956023  216293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4110.pem
	I1025 09:37:13.960596  216293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 08:37 /usr/share/ca-certificates/4110.pem
	I1025 09:37:13.960677  216293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4110.pem
	I1025 09:37:14.004399  216293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4110.pem /etc/ssl/certs/51391683.0"
	I1025 09:37:14.013765  216293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41102.pem && ln -fs /usr/share/ca-certificates/41102.pem /etc/ssl/certs/41102.pem"
	I1025 09:37:14.023028  216293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41102.pem
	I1025 09:37:14.027658  216293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 08:37 /usr/share/ca-certificates/41102.pem
	I1025 09:37:14.027735  216293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41102.pem
	I1025 09:37:14.070778  216293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41102.pem /etc/ssl/certs/3ec20f2e.0"
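Each test -L / ln -fs pair above implements OpenSSL's hashed-directory lookup: a CA is found under /etc/ssl/certs via a symlink named <subject-hash>.0, where the hash is what openssl x509 -hash prints (b5213941 for minikubeCA here). A sketch of the hash-and-link step, with paths from the log and simplified error handling:

// ca_symlink.go — compute the OpenSSL subject hash of a CA and create the
// "<hash>.0" symlink that OpenSSL's certificate lookup expects.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs equivalent: remove any existing link, then relink. The log
	// points the link at the copy already placed in /etc/ssl/certs.
	os.Remove(link)
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link)
}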
	I1025 09:37:14.079303  216293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:37:14.084005  216293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:37:14.126843  216293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:37:14.168425  216293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:37:14.211765  216293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:37:14.276559  216293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:37:14.354252  216293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
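The repeated openssl x509 -checkend 86400 runs above verify that each control-plane certificate remains valid for at least 24 hours before a cluster restart is attempted. The same check in native Go with crypto/x509, using one of the logged paths (a sketch, not the code minikube runs):

// checkend.go — fail if the certificate expires within the next 24 hours,
// mirroring `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past 24h, notAfter =", cert.NotAfter)
}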
	I1025 09:37:14.460340  216293 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-666079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-666079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:37:14.460443  216293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:37:14.460527  216293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:37:14.604929  216293 cri.go:89] found id: "c26bf38fd7e4b9f51947f954e4ee102888ffd02a198adb203972580c4eb3c74d"
	I1025 09:37:14.604966  216293 cri.go:89] found id: "93c1d103bf05eb8996db42684ab453c3e8a59e4287467d1fb344225e54155651"
	I1025 09:37:14.604972  216293 cri.go:89] found id: "fe95bac5f1e76131716e125587dd727d7db7bdabeed57b1078cc75158bc0da09"
	I1025 09:37:14.604985  216293 cri.go:89] found id: "36dbd5d0fba8fd463698b1cfb95820a97032c9e08ee3218bc4e23d5db821fa62"
	I1025 09:37:14.604988  216293 cri.go:89] found id: ""
	I1025 09:37:14.605053  216293 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:37:14.625213  216293 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:37:14Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:37:14.625301  216293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:37:14.670352  216293 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:37:14.670374  216293 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:37:14.670433  216293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:37:14.702662  216293 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:37:14.703129  216293 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-666079" does not appear in /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:37:14.703260  216293 kubeconfig.go:62] /home/jenkins/minikube-integration/21796-2312/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-666079" cluster setting kubeconfig missing "default-k8s-diff-port-666079" context setting]
	I1025 09:37:14.703591  216293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
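The kubeconfig repair above adds the missing cluster and context entries for the profile. A sketch of the same repair using client-go's clientcmd package (names and paths are from this run; k8s.io/client-go is required as a module dependency, and a matching user entry is assumed to exist already):

// kubeconfig_repair.go — add a cluster and context for a profile to an
// existing kubeconfig file.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	const path = "/home/jenkins/minikube-integration/21796-2312/kubeconfig"
	const name = "default-k8s-diff-port-666079"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	cluster := api.NewCluster()
	cluster.Server = "https://192.168.76.2:8444" // endpoint from the log
	cluster.CertificateAuthority = "/home/jenkins/minikube-integration/21796-2312/.minikube/ca.crt"
	cfg.Clusters[name] = cluster

	ctx := api.NewContext()
	ctx.Cluster = name
	ctx.AuthInfo = name // assumes this user entry already exists in the file
	cfg.Contexts[name] = ctx

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}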
	I1025 09:37:14.705195  216293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:37:14.723722  216293 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 09:37:14.723766  216293 kubeadm.go:601] duration metric: took 53.38553ms to restartPrimaryControlPlane
	I1025 09:37:14.723775  216293 kubeadm.go:402] duration metric: took 263.459372ms to StartCluster
	I1025 09:37:14.723789  216293 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:14.723864  216293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:37:14.724539  216293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:14.724768  216293 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:37:14.725144  216293 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:37:14.725223  216293 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-666079"
	I1025 09:37:14.725243  216293 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-666079"
	W1025 09:37:14.725255  216293 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:37:14.725276  216293 host.go:66] Checking if "default-k8s-diff-port-666079" exists ...
	I1025 09:37:14.725729  216293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:37:14.726152  216293 config.go:182] Loaded profile config "default-k8s-diff-port-666079": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:37:14.726221  216293 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-666079"
	I1025 09:37:14.726239  216293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-666079"
	I1025 09:37:14.726515  216293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:37:14.726668  216293 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-666079"
	I1025 09:37:14.726712  216293 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-666079"
	W1025 09:37:14.726742  216293 addons.go:247] addon dashboard should already be in state true
	I1025 09:37:14.726777  216293 host.go:66] Checking if "default-k8s-diff-port-666079" exists ...
	I1025 09:37:14.727246  216293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:37:14.737646  216293 out.go:179] * Verifying Kubernetes components...
	I1025 09:37:14.753453  216293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:14.787798  216293 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:37:14.790736  216293 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-666079"
	W1025 09:37:14.790814  216293 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:37:14.790846  216293 host.go:66] Checking if "default-k8s-diff-port-666079" exists ...
	I1025 09:37:14.791258  216293 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-666079 --format={{.State.Status}}
	I1025 09:37:14.791436  216293 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:37:14.791449  216293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:37:14.791486  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:14.791942  216293 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 09:37:14.794996  216293 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 09:37:14.797904  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 09:37:14.797924  216293 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 09:37:14.798105  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:14.839619  216293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:37:14.847927  216293 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:37:14.847952  216293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:37:14.848010  216293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-666079
	I1025 09:37:14.859606  216293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:37:14.877708  216293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/default-k8s-diff-port-666079/id_rsa Username:docker}
	I1025 09:37:15.275637  216293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:37:15.319240  216293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:37:15.341709  216293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:37:15.355571  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 09:37:15.355643  216293 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 09:37:15.461358  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 09:37:15.461429  216293 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 09:37:12.254434  214786 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:37:12.590267  214786 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:37:13.058351  214786 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:37:13.058492  214786 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-068349 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 09:37:13.755270  214786 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:37:13.755816  214786 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-068349 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 09:37:15.209347  214786 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:37:16.265762  214786 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:37:16.613439  214786 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:37:16.613914  214786 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:37:17.356261  214786 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:37:17.996076  214786 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:37:18.326350  214786 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:37:19.552967  214786 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:37:20.143960  214786 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:37:20.145123  214786 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:37:20.148160  214786 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:37:15.576548  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 09:37:15.576623  216293 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 09:37:15.671385  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 09:37:15.671460  216293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 09:37:15.739899  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 09:37:15.739974  216293 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 09:37:15.777663  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 09:37:15.777724  216293 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 09:37:15.814982  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 09:37:15.815057  216293 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 09:37:15.856516  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 09:37:15.856578  216293 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 09:37:15.893678  216293 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 09:37:15.893752  216293 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 09:37:15.930405  216293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
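The dashboard addon is applied as one batched kubectl invocation with a -f flag per staged manifest, so the whole addon lands (or fails) together. A sketch that assembles the same command, with the binary and manifest paths from the log:

// dashboard_apply.go — build and run the batched `kubectl apply` seen above.
// sudo accepts the leading KUBECONFIG=... environment assignment.
package main

import (
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml",
		"dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
		"dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml",
		"dashboard-secret.yaml", "dashboard-svc.yaml",
	}
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", "/etc/kubernetes/addons/"+m)
	}
	cmd := exec.Command("sudo", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}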
	I1025 09:37:20.151446  214786 out.go:252]   - Booting up control plane ...
	I1025 09:37:20.151555  214786 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:37:20.151637  214786 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:37:20.152873  214786 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:37:20.198092  214786 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:37:20.198205  214786 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:37:20.210370  214786 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:37:20.215365  214786 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:37:20.215812  214786 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:37:20.411037  214786 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:37:20.411161  214786 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:37:22.136606  216293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.860938426s)
	I1025 09:37:22.136722  216293 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.817411017s)
	I1025 09:37:22.136800  216293 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-666079" to be "Ready" ...
	I1025 09:37:22.176493  216293 node_ready.go:49] node "default-k8s-diff-port-666079" is "Ready"
	I1025 09:37:22.176574  216293 node_ready.go:38] duration metric: took 39.760169ms for node "default-k8s-diff-port-666079" to be "Ready" ...
	I1025 09:37:22.176603  216293 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:37:22.176698  216293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:37:24.561612  216293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.219831454s)
	I1025 09:37:24.790650  216293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.860156831s)
	I1025 09:37:24.790838  216293 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.614108545s)
	I1025 09:37:24.790859  216293 api_server.go:72] duration metric: took 10.066064389s to wait for apiserver process to appear ...
	I1025 09:37:24.790865  216293 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:37:24.790903  216293 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1025 09:37:24.793832  216293 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-666079 addons enable metrics-server
	
	I1025 09:37:24.796624  216293 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1025 09:37:24.799451  216293 addons.go:514] duration metric: took 10.074301486s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1025 09:37:24.807424  216293 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1025 09:37:24.810071  216293 api_server.go:141] control plane version: v1.34.1
	I1025 09:37:24.810098  216293 api_server.go:131] duration metric: took 19.207412ms to wait for apiserver health ...
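The healthz wait above is a plain HTTPS GET against the apiserver that succeeds on a 200 "ok" response. A minimal sketch (TLS verification is skipped here for brevity; the real client trusts the cluster CA instead):

// healthz_probe.go — query the apiserver's /healthz endpoint, as in the
// "returned 200: ok" lines above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const url = "https://192.168.76.2:8444/healthz" // endpoint from the log
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}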
	I1025 09:37:24.810108  216293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:37:24.832250  216293 system_pods.go:59] 8 kube-system pods found
	I1025 09:37:24.832294  216293 system_pods.go:61] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:37:24.832309  216293 system_pods.go:61] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:37:24.832315  216293 system_pods.go:61] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:37:24.832322  216293 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:37:24.832330  216293 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:37:24.832342  216293 system_pods.go:61] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 09:37:24.832349  216293 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:37:24.832364  216293 system_pods.go:61] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:37:24.832371  216293 system_pods.go:74] duration metric: took 22.256374ms to wait for pod list to return data ...
	I1025 09:37:24.832384  216293 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:37:24.852452  216293 default_sa.go:45] found service account: "default"
	I1025 09:37:24.852481  216293 default_sa.go:55] duration metric: took 20.091106ms for default service account to be created ...
	I1025 09:37:24.852491  216293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:37:24.931080  216293 system_pods.go:86] 8 kube-system pods found
	I1025 09:37:24.931125  216293 system_pods.go:89] "coredns-66bc5c9577-dzmkq" [991a35a6-4303-41e2-b7f8-3d267c5fc2ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:37:24.931136  216293 system_pods.go:89] "etcd-default-k8s-diff-port-666079" [48199b7d-3848-41a8-ac12-69176ce87480] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:37:24.931142  216293 system_pods.go:89] "kindnet-28vnv" [7efe42d1-6ccc-4898-8927-11f06d512ee1] Running
	I1025 09:37:24.931155  216293 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-666079" [ca0bef19-f719-4f47-b127-ea12a2faa23d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:37:24.931171  216293 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-666079" [c17326de-704e-4ba5-ac46-316407ce00f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:37:24.931189  216293 system_pods.go:89] "kube-proxy-65j7p" [e9d046e5-ee7b-43a7-b854-2597df0f1432] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 09:37:24.931201  216293 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-666079" [99eaee03-92e9-47a5-8e65-8c20a7a00558] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:37:24.931208  216293 system_pods.go:89] "storage-provisioner" [ded1f77d-7f3d-48f0-94ef-13367b475def] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:37:24.931222  216293 system_pods.go:126] duration metric: took 78.726137ms to wait for k8s-apps to be running ...
	I1025 09:37:24.931231  216293 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:37:24.931298  216293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:37:24.964502  216293 system_svc.go:56] duration metric: took 33.262733ms WaitForService to wait for kubelet
	I1025 09:37:24.964541  216293 kubeadm.go:586] duration metric: took 10.23973488s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:37:24.964561  216293 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:37:24.986728  216293 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:37:24.986763  216293 node_conditions.go:123] node cpu capacity is 2
	I1025 09:37:24.986776  216293 node_conditions.go:105] duration metric: took 22.209858ms to run NodePressure ...
	I1025 09:37:24.986798  216293 start.go:241] waiting for startup goroutines ...
	I1025 09:37:24.986809  216293 start.go:246] waiting for cluster config update ...
	I1025 09:37:24.986826  216293 start.go:255] writing updated cluster config ...
	I1025 09:37:24.987144  216293 ssh_runner.go:195] Run: rm -f paused
	I1025 09:37:24.993944  216293 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:37:25.039608  216293 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dzmkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:21.920941  214786 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500832072s
	I1025 09:37:21.922341  214786 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:37:21.922715  214786 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1025 09:37:21.923577  214786 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:37:21.924166  214786 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1025 09:37:27.050559  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:29.546643  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	I1025 09:37:27.376100  214786 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.452075196s
	I1025 09:37:30.246995  214786 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.322380868s
	I1025 09:37:32.425840  214786 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.502647168s
	I1025 09:37:32.451373  214786 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:37:32.470313  214786 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:37:32.487492  214786 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:37:32.487707  214786 kubeadm.go:318] [mark-control-plane] Marking the node auto-068349 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:37:32.504207  214786 kubeadm.go:318] [bootstrap-token] Using token: v8cdem.gfd8enqpbrf2mgjt
	I1025 09:37:32.507304  214786 out.go:252]   - Configuring RBAC rules ...
	I1025 09:37:32.507427  214786 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:37:32.517413  214786 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:37:32.528013  214786 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:37:32.533673  214786 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:37:32.540267  214786 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:37:32.547339  214786 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:37:32.833641  214786 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:37:33.288624  214786 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:37:33.842105  214786 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:37:33.846498  214786 kubeadm.go:318] 
	I1025 09:37:33.846682  214786 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:37:33.846691  214786 kubeadm.go:318] 
	I1025 09:37:33.846779  214786 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:37:33.846785  214786 kubeadm.go:318] 
	I1025 09:37:33.846811  214786 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:37:33.846926  214786 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:37:33.846980  214786 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:37:33.846985  214786 kubeadm.go:318] 
	I1025 09:37:33.847040  214786 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:37:33.847044  214786 kubeadm.go:318] 
	I1025 09:37:33.847095  214786 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:37:33.847099  214786 kubeadm.go:318] 
	I1025 09:37:33.847153  214786 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:37:33.847230  214786 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:37:33.847301  214786 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:37:33.847305  214786 kubeadm.go:318] 
	I1025 09:37:33.847392  214786 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:37:33.847471  214786 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:37:33.847476  214786 kubeadm.go:318] 
	I1025 09:37:33.847562  214786 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token v8cdem.gfd8enqpbrf2mgjt \
	I1025 09:37:33.847669  214786 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b \
	I1025 09:37:33.847694  214786 kubeadm.go:318] 	--control-plane 
	I1025 09:37:33.847698  214786 kubeadm.go:318] 
	I1025 09:37:33.847786  214786 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:37:33.847790  214786 kubeadm.go:318] 
	I1025 09:37:33.847876  214786 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token v8cdem.gfd8enqpbrf2mgjt \
	I1025 09:37:33.848008  214786 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:e54eda66e4b00d0a990c315bf332782d8922fe6750f8b2bc2791b23f9457095b 
	I1025 09:37:33.860121  214786 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 09:37:33.860344  214786 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 09:37:33.860447  214786 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:37:33.860462  214786 cni.go:84] Creating CNI manager for ""
	I1025 09:37:33.860469  214786 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:37:33.864168  214786 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1025 09:37:31.548144  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:33.562306  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	I1025 09:37:33.867296  214786 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:37:33.878468  214786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:37:33.878486  214786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:37:33.917722  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:37:34.397766  214786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:37:34.397900  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:34.397994  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-068349 minikube.k8s.io/updated_at=2025_10_25T09_37_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373 minikube.k8s.io/name=auto-068349 minikube.k8s.io/primary=true
	I1025 09:37:34.834936  214786 ops.go:34] apiserver oom_adj: -16
	I1025 09:37:34.835046  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:35.335825  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:35.835609  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:36.335143  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:36.835359  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:37.335703  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:37.835152  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:38.335512  214786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:37:38.473931  214786 kubeadm.go:1113] duration metric: took 4.076071803s to wait for elevateKubeSystemPrivileges
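The burst of kubectl get sa default runs above (one every ~500ms) is elevateKubeSystemPrivileges waiting for the default service account, which the controller manager creates asynchronously after the apiserver comes up. A sketch of that poll (binary path and kubeconfig from the log; the 2-minute budget is an assumption):

// sa_wait.go — retry `kubectl get sa default` until the default service
// account exists or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for default service account")
}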
	I1025 09:37:38.473973  214786 kubeadm.go:402] duration metric: took 28.759573856s to StartCluster
	I1025 09:37:38.474011  214786 settings.go:142] acquiring lock: {Name:mk0d8a31751f5e359928da7d7271367bd4b397fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:38.474111  214786 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:37:38.475285  214786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/kubeconfig: {Name:mk29749a0edbb941c188588ea6aef25a517ab571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:37:38.475560  214786 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:37:38.475702  214786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:37:38.476023  214786 config.go:182] Loaded profile config "auto-068349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:37:38.476092  214786 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:37:38.476170  214786 addons.go:69] Setting storage-provisioner=true in profile "auto-068349"
	I1025 09:37:38.476183  214786 addons.go:238] Setting addon storage-provisioner=true in "auto-068349"
	I1025 09:37:38.476222  214786 host.go:66] Checking if "auto-068349" exists ...
	I1025 09:37:38.477614  214786 cli_runner.go:164] Run: docker container inspect auto-068349 --format={{.State.Status}}
	I1025 09:37:38.477801  214786 addons.go:69] Setting default-storageclass=true in profile "auto-068349"
	I1025 09:37:38.477862  214786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-068349"
	I1025 09:37:38.478378  214786 cli_runner.go:164] Run: docker container inspect auto-068349 --format={{.State.Status}}
	I1025 09:37:38.480453  214786 out.go:179] * Verifying Kubernetes components...
	I1025 09:37:38.486298  214786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:37:38.524301  214786 addons.go:238] Setting addon default-storageclass=true in "auto-068349"
	I1025 09:37:38.524351  214786 host.go:66] Checking if "auto-068349" exists ...
	I1025 09:37:38.524878  214786 cli_runner.go:164] Run: docker container inspect auto-068349 --format={{.State.Status}}
	I1025 09:37:38.551887  214786 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:37:38.556857  214786 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:37:38.556880  214786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:37:38.556972  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:38.573486  214786 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:37:38.573507  214786 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:37:38.573576  214786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-068349
	I1025 09:37:38.586416  214786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa Username:docker}
	I1025 09:37:38.624836  214786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/auto-068349/id_rsa Username:docker}
	I1025 09:37:39.231920  214786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:37:39.246329  214786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:37:39.262489  214786 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:37:39.262926  214786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:37:40.470934  214786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.224516428s)
	I1025 09:37:40.471231  214786 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.20856242s)
	I1025 09:37:40.471395  214786 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.208413471s)
	I1025 09:37:40.471419  214786 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1025 09:37:40.472304  214786 node_ready.go:35] waiting up to 15m0s for node "auto-068349" to be "Ready" ...
	I1025 09:37:40.475266  214786 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1025 09:37:36.045492  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:38.048242  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:40.545236  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	I1025 09:37:40.479338  214786 addons.go:514] duration metric: took 2.003242983s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1025 09:37:40.976821  214786 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-068349" context rescaled to 1 replicas
	W1025 09:37:42.545876  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:45.060094  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:42.475416  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:37:44.475720  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:37:47.549605  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:50.050862  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:46.975791  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:37:49.477387  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:37:52.545927  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:54.546731  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	W1025 09:37:51.976660  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:37:54.475719  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:37:57.046246  216293 pod_ready.go:104] pod "coredns-66bc5c9577-dzmkq" is not "Ready", error: <nil>
	I1025 09:37:59.044952  216293 pod_ready.go:94] pod "coredns-66bc5c9577-dzmkq" is "Ready"
	I1025 09:37:59.044983  216293 pod_ready.go:86] duration metric: took 34.005343863s for pod "coredns-66bc5c9577-dzmkq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:59.048184  216293 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:59.052576  216293 pod_ready.go:94] pod "etcd-default-k8s-diff-port-666079" is "Ready"
	I1025 09:37:59.052604  216293 pod_ready.go:86] duration metric: took 4.395273ms for pod "etcd-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:59.054930  216293 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:59.059303  216293 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-666079" is "Ready"
	I1025 09:37:59.059334  216293 pod_ready.go:86] duration metric: took 4.378887ms for pod "kube-apiserver-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:59.061643  216293 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:59.242819  216293 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-666079" is "Ready"
	I1025 09:37:59.242850  216293 pod_ready.go:86] duration metric: took 181.182508ms for pod "kube-controller-manager-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:59.442983  216293 pod_ready.go:83] waiting for pod "kube-proxy-65j7p" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:37:59.843100  216293 pod_ready.go:94] pod "kube-proxy-65j7p" is "Ready"
	I1025 09:37:59.843127  216293 pod_ready.go:86] duration metric: took 400.116153ms for pod "kube-proxy-65j7p" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:38:00.057426  216293 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:38:00.443739  216293 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-666079" is "Ready"
	I1025 09:38:00.443766  216293 pod_ready.go:86] duration metric: took 386.312856ms for pod "kube-scheduler-default-k8s-diff-port-666079" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:38:00.443780  216293 pod_ready.go:40] duration metric: took 35.449801908s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:38:00.519304  216293 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:38:00.522677  216293 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-666079" cluster and "default" namespace by default
	W1025 09:37:56.976229  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:37:59.475470  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:38:01.976079  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:38:04.474824  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:38:06.475756  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:38:08.975352  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:38:10.975524  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:38:13.475206  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
	W1025 09:38:15.975367  214786 node_ready.go:57] node "auto-068349" has "Ready":"False" status (will retry)
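	The Corefile rewrite that completed at 09:37:40 above injects a hosts block (and a log directive) into the coredns ConfigMap. A sketch of the resulting Corefile, assuming the stock kubeadm/minikube Corefile; only the hosts block and the log line come from the sed command itself:
	
	    .:53 {
	        log
	        errors
	        health
	        ready
	        kubernetes cluster.local in-addr.arpa ip6.arpa {
	           pods insecure
	           fallthrough in-addr.arpa ip6.arpa
	           ttl 30
	        }
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        cache 30
	        loop
	        reload
	        loadbalance
	    }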
	
	
	==> CRI-O <==
	Oct 25 09:37:52 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:52.202561278Z" level=info msg="Removed container 5aaa7986e6f9d7b8cf1311e668b322abf9c9f26c1f9f24fefe78e4d2da758fb4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8cm6/dashboard-metrics-scraper" id=7dfb810c-b83c-4675-ac50-4a7176939ddd name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:37:55 default-k8s-diff-port-666079 conmon[1194]: conmon 6c54fce55676c84d4384 <ninfo>: container 1197 exited with status 1
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.198081874Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=37d9c1fa-b55a-4ab5-b23c-30835d492f86 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.199179461Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e809007c-09f2-4071-a85e-6b218d38e09a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.200356008Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=765f365a-e7f6-4ce6-a867-7c25d177ea31 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.200479119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.206833979Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.207062593Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9f9beb9dac1b11cf97a47088db5dd6555f62a179bef274e25b055acd8744bef7/merged/etc/passwd: no such file or directory"
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.207088874Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9f9beb9dac1b11cf97a47088db5dd6555f62a179bef274e25b055acd8744bef7/merged/etc/group: no such file or directory"
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.207432254Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.225628378Z" level=info msg="Created container fcd374f1688b8ed4e7571ec994e121446bd298b6684eee33d3b3ab788c09fd2a: kube-system/storage-provisioner/storage-provisioner" id=765f365a-e7f6-4ce6-a867-7c25d177ea31 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.227531578Z" level=info msg="Starting container: fcd374f1688b8ed4e7571ec994e121446bd298b6684eee33d3b3ab788c09fd2a" id=d9a12f8d-8309-4b8b-b72c-e91012f3de0e name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:37:55 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:37:55.229272348Z" level=info msg="Started container" PID=1647 containerID=fcd374f1688b8ed4e7571ec994e121446bd298b6684eee33d3b3ab788c09fd2a description=kube-system/storage-provisioner/storage-provisioner id=d9a12f8d-8309-4b8b-b72c-e91012f3de0e name=/runtime.v1.RuntimeService/StartContainer sandboxID=dec8a1dfb7e4d319fa2dbe078630b07ecf1c6e812e0797d7c107bc8fa3dc4e66
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.154440761Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.159604519Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.159642123Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.159675092Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.164899289Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.164932684Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.164954436Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.168254838Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.168290301Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.16831376Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.171532183Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 09:38:04 default-k8s-diff-port-666079 crio[655]: time="2025-10-25T09:38:04.171588446Z" level=info msg="Updated default CNI network name to kindnet"
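	The CREATE/WRITE/RENAME sequence above is kindnet writing its CNI config to a .temp file and atomically renaming it into place. A sketch of what /etc/cni/net.d/10-kindnet.conflist typically holds; type=ptp and mtu 1500 match this log and the pod subnet matches the node's PodCIDR below, but the exact fields are illustrative:
	
	    {
	      "cniVersion": "0.3.1",
	      "name": "kindnet",
	      "plugins": [
	        {
	          "type": "ptp",
	          "ipMasq": false,
	          "mtu": 1500,
	          "ipam": {
	            "type": "host-local",
	            "ranges": [[{ "subnet": "10.244.0.0/24" }]],
	            "routes": [{ "dst": "0.0.0.0/0" }]
	          }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }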
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	fcd374f1688b8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago       Running             storage-provisioner         2                   dec8a1dfb7e4d       storage-provisioner                                    kube-system
	7ba993183832e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   9e2d2ef412434       dashboard-metrics-scraper-6ffb444bf9-n8cm6             kubernetes-dashboard
	8fbb9eadefbd8       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago       Running             kubernetes-dashboard        0                   6d9aa50a0365d       kubernetes-dashboard-855c9754f9-v6j8w                  kubernetes-dashboard
	6c54fce55676c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   dec8a1dfb7e4d       storage-provisioner                                    kube-system
	d2e32fa53d02a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   06e23fc1ccd57       kube-proxy-65j7p                                       kube-system
	e166325c1923d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   dd681068014df       coredns-66bc5c9577-dzmkq                               kube-system
	bcf200eeb50f5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   7e1c0a2f8ff65       kindnet-28vnv                                          kube-system
	19f2c73a067d0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   77286359144c8       busybox                                                default
	c26bf38fd7e4b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   ac73eebc77c0f       kube-apiserver-default-k8s-diff-port-666079            kube-system
	93c1d103bf05e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   d6c624662f01d       etcd-default-k8s-diff-port-666079                      kube-system
	fe95bac5f1e76       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   97c04f20a1252       kube-controller-manager-default-k8s-diff-port-666079   kube-system
	36dbd5d0fba8f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   77bfab8d2e645       kube-scheduler-default-k8s-diff-port-666079            kube-system
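	This table is CRI-level state and can be reproduced on the node with crictl against the CRI-O socket, e.g.:
	
	    sudo crictl ps -a    # -a includes the two Exited containers listed above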
	
	
	==> coredns [e166325c1923d08d8647a1a3c29bf323317468e389b1f4993ca6afafc167012d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48613 - 30511 "HINFO IN 3318603629285010333.775493663660593314. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016979054s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
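	The dial timeouts to 10.96.0.1:443 are CoreDNS failing to reach the kubernetes Service VIP while service rules are still being restored after the node restart; they stop once kube-proxy syncs (see its log below). One way to confirm the VIP is programmed, assuming the iptables proxier shown later in this log:
	
	    sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1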
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-666079
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-666079
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3de5ce1075e776fa55a26fe8396669cc53a4373
	                    minikube.k8s.io/name=default-k8s-diff-port-666079
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_35_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:35:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-666079
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:38:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:38:02 +0000   Sat, 25 Oct 2025 09:35:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:38:02 +0000   Sat, 25 Oct 2025 09:35:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:38:02 +0000   Sat, 25 Oct 2025 09:35:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:38:02 +0000   Sat, 25 Oct 2025 09:36:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-666079
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                492daa44-3080-463c-abfd-050b629beadb
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-dzmkq                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-default-k8s-diff-port-666079                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-28vnv                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-666079             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-666079    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-65j7p                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-default-k8s-diff-port-666079             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-n8cm6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-v6j8w                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 52s                    kube-proxy       
	  Warning  CgroupV1                 2m38s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m37s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m37s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m23s                  node-controller  Node default-k8s-diff-port-666079 event: Registered Node default-k8s-diff-port-666079 in Controller
	  Normal   NodeReady                100s                   kubelet          Node default-k8s-diff-port-666079 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-666079 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node default-k8s-diff-port-666079 event: Registered Node default-k8s-diff-port-666079 in Controller
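	This section can be regenerated against the cluster with kubectl; the 42% CPU figure is the 850m of requests measured against the 2000m allocatable on this 2-vCPU node:
	
	    kubectl describe node default-k8s-diff-port-666079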
	
	
	==> dmesg <==
	[Oct25 09:15] overlayfs: idmapped layers are currently not supported
	[ +40.253798] overlayfs: idmapped layers are currently not supported
	[Oct25 09:16] overlayfs: idmapped layers are currently not supported
	[Oct25 09:17] overlayfs: idmapped layers are currently not supported
	[Oct25 09:18] overlayfs: idmapped layers are currently not supported
	[  +9.191019] hrtimer: interrupt took 13462946 ns
	[Oct25 09:20] overlayfs: idmapped layers are currently not supported
	[Oct25 09:22] overlayfs: idmapped layers are currently not supported
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	[Oct25 09:32] overlayfs: idmapped layers are currently not supported
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	[ +28.289706] overlayfs: idmapped layers are currently not supported
	[Oct25 09:35] overlayfs: idmapped layers are currently not supported
	[Oct25 09:36] overlayfs: idmapped layers are currently not supported
	[ +24.160248] overlayfs: idmapped layers are currently not supported
	[Oct25 09:37] overlayfs: idmapped layers are currently not supported
	[  +8.216028] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [93c1d103bf05eb8996db42684ab453c3e8a59e4287467d1fb344225e54155651] <==
	{"level":"warn","ts":"2025-10-25T09:37:19.688431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.719061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.757463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.774155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.810854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.829355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.844944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.872568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.898108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.926973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.958450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:19.978080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.017889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.036284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.059042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.101886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.140049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.167802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.191613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.214016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.252171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.267863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.321768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.333125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:37:20.430871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35506","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:38:18 up  1:20,  0 user,  load average: 4.04, 4.11, 3.24
	Linux default-k8s-diff-port-666079 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
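	The three lines above correspond to uptime, uname -a, and the PRETTY_NAME field of /etc/os-release, i.e. roughly:
	
	    uptime && uname -a && grep PRETTY_NAME /etc/os-release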
	
	
	==> kindnet [bcf200eeb50f5e2d26ad7b92d990c6b3d8d58108b4336e8005c6dfaaaa9cbc6b] <==
	I1025 09:37:23.925487       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:37:23.925749       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 09:37:23.925858       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:37:23.925869       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:37:23.925882       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:37:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:37:24.154652       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:37:24.154675       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:37:24.154685       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:37:24.154971       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 09:37:54.154408       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 09:37:54.155358       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 09:37:54.155361       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 09:37:54.155475       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1025 09:37:55.455026       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:37:55.455135       1 metrics.go:72] Registering metrics
	I1025 09:37:55.455231       1 controller.go:711] "Syncing nftables rules"
	I1025 09:38:04.154104       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:38:04.154169       1 main.go:301] handling current node
	I1025 09:38:14.162929       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 09:38:14.162961       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c26bf38fd7e4b9f51947f954e4ee102888ffd02a198adb203972580c4eb3c74d] <==
	I1025 09:37:21.802590       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:37:21.813928       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:37:21.862647       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 09:37:21.878555       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:37:21.888307       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 09:37:21.888332       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:37:21.888433       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 09:37:21.888664       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:37:21.896241       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:37:21.897327       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:37:21.897343       1 policy_source.go:240] refreshing policies
	I1025 09:37:21.910055       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 09:37:21.942220       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:37:22.009302       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1025 09:37:22.017937       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:37:22.498687       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:37:23.803475       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:37:24.177737       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:37:24.472468       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:37:24.531400       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:37:24.725916       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.113.205"}
	I1025 09:37:24.778838       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.27.36"}
	I1025 09:37:26.099157       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:37:26.479049       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:37:26.611444       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
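	The two "allocated clusterIPs" lines at 09:37:24 are the dashboard Services being created; they can be listed with:
	
	    kubectl -n kubernetes-dashboard get svc
	    # expected: kubernetes-dashboard on 10.108.113.205,
	    # dashboard-metrics-scraper on 10.100.27.36 (per the log above)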
	
	
	==> kube-controller-manager [fe95bac5f1e76131716e125587dd727d7db7bdabeed57b1078cc75158bc0da09] <==
	I1025 09:37:26.059985       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:37:26.069082       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:37:26.071532       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:37:26.071806       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:37:26.071856       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:37:26.077676       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:37:26.077860       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 09:37:26.077938       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 09:37:26.078009       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 09:37:26.078045       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 09:37:26.078085       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:37:26.078804       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:37:26.081585       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:37:26.085084       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:37:26.089316       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:37:26.091936       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:37:26.092052       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:37:26.094060       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:37:26.098506       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:37:26.107279       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:37:26.108654       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:37:26.131557       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:37:26.139884       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:37:26.139970       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:37:26.140001       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [d2e32fa53d02a52e39f9a3c61406c3eba615d2628a65822c3a98cee9707208b7] <==
	I1025 09:37:25.267273       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:37:25.611525       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:37:25.915484       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:37:25.915524       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 09:37:25.915592       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:37:26.138276       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:37:26.138401       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:37:26.200382       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:37:26.200767       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:37:26.200829       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:37:26.201937       1 config.go:200] "Starting service config controller"
	I1025 09:37:26.201964       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:37:26.204167       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:37:26.204209       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:37:26.204268       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:37:26.204294       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:37:26.204963       1 config.go:309] "Starting node config controller"
	I1025 09:37:26.207445       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:37:26.207514       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:37:26.302391       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:37:26.304599       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:37:26.304624       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
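	The "nodePortAddresses is unset" warning at 09:37:25 suggests restricting where NodePort traffic is accepted. A sketch of the corresponding KubeProxyConfiguration stanza (an illustration of the suggested setting, not what this cluster runs):
	
	    apiVersion: kubeproxy.config.k8s.io/v1alpha1
	    kind: KubeProxyConfiguration
	    mode: iptables
	    nodePortAddresses:
	      - primary   # accept NodePort connections only on the node's primary IPs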
	
	
	==> kube-scheduler [36dbd5d0fba8fd463698b1cfb95820a97032c9e08ee3218bc4e23d5db821fa62] <==
	I1025 09:37:19.066157       1 serving.go:386] Generated self-signed cert in-memory
	I1025 09:37:25.609649       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:37:25.609737       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:37:25.627595       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 09:37:25.627709       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 09:37:25.627830       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:37:25.627924       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:37:25.627983       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:37:25.628025       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:37:25.629298       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:37:25.629387       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:37:25.740055       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 09:37:25.740180       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:37:25.740927       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:37:26 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:26.735748     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8d8e464a-fad4-4966-91eb-5d8b916d9ed7-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-v6j8w\" (UID: \"8d8e464a-fad4-4966-91eb-5d8b916d9ed7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v6j8w"
	Oct 25 09:37:26 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:26.735771     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4v57\" (UniqueName: \"kubernetes.io/projected/8d8e464a-fad4-4966-91eb-5d8b916d9ed7-kube-api-access-f4v57\") pod \"kubernetes-dashboard-855c9754f9-v6j8w\" (UID: \"8d8e464a-fad4-4966-91eb-5d8b916d9ed7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v6j8w"
	Oct 25 09:37:26 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:26.735793     783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/280832ce-76f8-440d-a575-b77df2e00e0a-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-n8cm6\" (UID: \"280832ce-76f8-440d-a575-b77df2e00e0a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8cm6"
	Oct 25 09:37:26 default-k8s-diff-port-666079 kubelet[783]: W1025 09:37:26.970577     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/crio-9e2d2ef41243482de269af45b541cbce6b1e309bc4ccff5479fa2c51a01110e4 WatchSource:0}: Error finding container 9e2d2ef41243482de269af45b541cbce6b1e309bc4ccff5479fa2c51a01110e4: Status 404 returned error can't find the container with id 9e2d2ef41243482de269af45b541cbce6b1e309bc4ccff5479fa2c51a01110e4
	Oct 25 09:37:26 default-k8s-diff-port-666079 kubelet[783]: W1025 09:37:26.997639     783 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/957d2a4135a8dcb0784416c8d5211d69b209106d0d8dfa7985ab81c8fdcf8862/crio-6d9aa50a0365d84227fe63cb67f5eb2df0a85e2a1477646ed549bfb7a8ac5d5b WatchSource:0}: Error finding container 6d9aa50a0365d84227fe63cb67f5eb2df0a85e2a1477646ed549bfb7a8ac5d5b: Status 404 returned error can't find the container with id 6d9aa50a0365d84227fe63cb67f5eb2df0a85e2a1477646ed549bfb7a8ac5d5b
	Oct 25 09:37:28 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:28.848308     783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 09:37:34 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:34.111634     783 scope.go:117] "RemoveContainer" containerID="8d2938558d3d5a4dccdaebd7a18351c5b20a9c6225f1279ba3423be4339911c9"
	Oct 25 09:37:35 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:35.117008     783 scope.go:117] "RemoveContainer" containerID="8d2938558d3d5a4dccdaebd7a18351c5b20a9c6225f1279ba3423be4339911c9"
	Oct 25 09:37:35 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:35.117298     783 scope.go:117] "RemoveContainer" containerID="5aaa7986e6f9d7b8cf1311e668b322abf9c9f26c1f9f24fefe78e4d2da758fb4"
	Oct 25 09:37:35 default-k8s-diff-port-666079 kubelet[783]: E1025 09:37:35.117440     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8cm6_kubernetes-dashboard(280832ce-76f8-440d-a575-b77df2e00e0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8cm6" podUID="280832ce-76f8-440d-a575-b77df2e00e0a"
	Oct 25 09:37:36 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:36.909039     783 scope.go:117] "RemoveContainer" containerID="5aaa7986e6f9d7b8cf1311e668b322abf9c9f26c1f9f24fefe78e4d2da758fb4"
	Oct 25 09:37:36 default-k8s-diff-port-666079 kubelet[783]: E1025 09:37:36.909233     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8cm6_kubernetes-dashboard(280832ce-76f8-440d-a575-b77df2e00e0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8cm6" podUID="280832ce-76f8-440d-a575-b77df2e00e0a"
	Oct 25 09:37:51 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:51.792413     783 scope.go:117] "RemoveContainer" containerID="5aaa7986e6f9d7b8cf1311e668b322abf9c9f26c1f9f24fefe78e4d2da758fb4"
	Oct 25 09:37:52 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:52.187359     783 scope.go:117] "RemoveContainer" containerID="5aaa7986e6f9d7b8cf1311e668b322abf9c9f26c1f9f24fefe78e4d2da758fb4"
	Oct 25 09:37:52 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:52.187668     783 scope.go:117] "RemoveContainer" containerID="7ba993183832edaaf183af2a7b8cff0dbc3f87072503fc42186cda8f2ee1e23c"
	Oct 25 09:37:52 default-k8s-diff-port-666079 kubelet[783]: E1025 09:37:52.187842     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8cm6_kubernetes-dashboard(280832ce-76f8-440d-a575-b77df2e00e0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8cm6" podUID="280832ce-76f8-440d-a575-b77df2e00e0a"
	Oct 25 09:37:52 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:52.211693     783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v6j8w" podStartSLOduration=13.900403321 podStartE2EDuration="26.211670393s" podCreationTimestamp="2025-10-25 09:37:26 +0000 UTC" firstStartedPulling="2025-10-25 09:37:27.008219206 +0000 UTC m=+13.533618711" lastFinishedPulling="2025-10-25 09:37:39.319486278 +0000 UTC m=+25.844885783" observedRunningTime="2025-10-25 09:37:40.215444037 +0000 UTC m=+26.740843550" watchObservedRunningTime="2025-10-25 09:37:52.211670393 +0000 UTC m=+38.737069906"
	Oct 25 09:37:55 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:55.197109     783 scope.go:117] "RemoveContainer" containerID="6c54fce55676c84d4384dd7ac96ecf2530d5a363686e91690dc3545792bcc0b6"
	Oct 25 09:37:56 default-k8s-diff-port-666079 kubelet[783]: I1025 09:37:56.909427     783 scope.go:117] "RemoveContainer" containerID="7ba993183832edaaf183af2a7b8cff0dbc3f87072503fc42186cda8f2ee1e23c"
	Oct 25 09:37:56 default-k8s-diff-port-666079 kubelet[783]: E1025 09:37:56.910126     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8cm6_kubernetes-dashboard(280832ce-76f8-440d-a575-b77df2e00e0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8cm6" podUID="280832ce-76f8-440d-a575-b77df2e00e0a"
	Oct 25 09:38:07 default-k8s-diff-port-666079 kubelet[783]: I1025 09:38:07.792344     783 scope.go:117] "RemoveContainer" containerID="7ba993183832edaaf183af2a7b8cff0dbc3f87072503fc42186cda8f2ee1e23c"
	Oct 25 09:38:07 default-k8s-diff-port-666079 kubelet[783]: E1025 09:38:07.793398     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8cm6_kubernetes-dashboard(280832ce-76f8-440d-a575-b77df2e00e0a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8cm6" podUID="280832ce-76f8-440d-a575-b77df2e00e0a"
	Oct 25 09:38:12 default-k8s-diff-port-666079 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 09:38:12 default-k8s-diff-port-666079 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 09:38:12 default-k8s-diff-port-666079 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [8fbb9eadefbd80b899692ec9dd8c86fba760ca25136cdb11e58fcf1c5b382d3f] <==
	2025/10/25 09:37:39 Using namespace: kubernetes-dashboard
	2025/10/25 09:37:39 Using in-cluster config to connect to apiserver
	2025/10/25 09:37:39 Using secret token for csrf signing
	2025/10/25 09:37:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 09:37:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 09:37:39 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 09:37:39 Generating JWE encryption key
	2025/10/25 09:37:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 09:37:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 09:37:39 Initializing JWE encryption key from synchronized object
	2025/10/25 09:37:39 Creating in-cluster Sidecar client
	2025/10/25 09:37:39 Serving insecurely on HTTP port: 9090
	2025/10/25 09:37:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:38:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 09:37:39 Starting overwatch
	
	
	==> storage-provisioner [6c54fce55676c84d4384dd7ac96ecf2530d5a363686e91690dc3545792bcc0b6] <==
	I1025 09:37:25.079186       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 09:37:55.105182       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fcd374f1688b8ed4e7571ec994e121446bd298b6684eee33d3b3ab788c09fd2a] <==
	I1025 09:37:55.247761       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:37:55.259852       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:37:55.259983       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:37:55.262304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:58.717522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:02.978430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:06.577099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:09.630876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:12.655245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:12.664491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:38:12.664731       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:38:12.667371       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-666079_ac0899b9-7a72-4472-90fc-3a6456555790!
	I1025 09:38:12.669098       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a28afa3c-22cb-41cd-9bf1-a7e2b455d9f3", APIVersion:"v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-666079_ac0899b9-7a72-4472-90fc-3a6456555790 became leader
	W1025 09:38:12.675943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:12.687741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:38:12.767827       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-666079_ac0899b9-7a72-4472-90fc-3a6456555790!
	W1025 09:38:14.691374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:14.700968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:16.704086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:16.709239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-666079 -n default-k8s-diff-port-666079
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-666079 -n default-k8s-diff-port-666079: exit status 2 (370.255593ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-666079 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.98s)
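Note: the post-mortem above shows two symptoms consistent with the apiserver being briefly unreachable around the pause, rather than a defect in the workloads themselves: the kubelet doubling its CrashLoopBackOff delay for dashboard-metrics-scraper (back-off 10s, then 20s), and the first storage-provisioner container exiting fatally after a 30s i/o timeout dialing the apiserver service IP (10.96.0.1:443) before its replacement re-acquired the k8s.io-minikube-hostpath leader lease. A quick way to confirm which pods are still recovering after an unpause (hypothetical follow-up commands, not part of the recorded run; the scraper's k8s-app label is assumed from the upstream dashboard manifests):

	kubectl --context default-k8s-diff-port-666079 get pods -A --field-selector=status.phase!=Running
	kubectl --context default-k8s-diff-port-666079 -n kubernetes-dashboard describe pod -l k8s-app=dashboard-metrics-scraper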

                                                
                                    

Test pass (260/326)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.63
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.06
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.1
18 TestDownloadOnly/v1.34.1/DeleteAll 0.27
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.1
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.11
27 TestAddons/Setup 181.3
31 TestAddons/serial/GCPAuth/Namespaces 0.23
32 TestAddons/serial/GCPAuth/FakeCredentials 9.81
48 TestAddons/StoppedEnableDisable 12.41
49 TestCertOptions 38.79
50 TestCertExpiration 243.95
52 TestForceSystemdFlag 45.74
53 TestForceSystemdEnv 46.84
58 TestErrorSpam/setup 32.55
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.14
61 TestErrorSpam/pause 5.65
62 TestErrorSpam/unpause 6.03
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 78.55
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.11
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
75 TestFunctional/serial/CacheCmd/cache/add_local 1.07
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.83
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 53.26
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.47
86 TestFunctional/serial/LogsFileCmd 1.46
87 TestFunctional/serial/InvalidService 4.54
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 11.43
91 TestFunctional/parallel/DryRun 0.49
92 TestFunctional/parallel/InternationalLanguage 0.27
93 TestFunctional/parallel/StatusCmd 1.25
98 TestFunctional/parallel/AddonsCmd 0.22
99 TestFunctional/parallel/PersistentVolumeClaim 24.64
101 TestFunctional/parallel/SSHCmd 0.71
102 TestFunctional/parallel/CpCmd 2.05
104 TestFunctional/parallel/FileSync 0.33
105 TestFunctional/parallel/CertSync 2.2
109 TestFunctional/parallel/NodeLabels 0.13
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
113 TestFunctional/parallel/License 0.34
114 TestFunctional/parallel/Version/short 0.1
115 TestFunctional/parallel/Version/components 0.64
116 TestFunctional/parallel/ImageCommands/ImageListShort 1.87
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.47
121 TestFunctional/parallel/ImageCommands/Setup 0.71
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.58
129 TestFunctional/parallel/ProfileCmd/profile_list 0.55
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.55
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.65
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.35
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/MountCmd/any-port 6.81
148 TestFunctional/parallel/MountCmd/specific-port 2.07
149 TestFunctional/parallel/MountCmd/VerifyCleanup 2.37
150 TestFunctional/parallel/ServiceCmd/List 0.65
151 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 208.69
163 TestMultiControlPlane/serial/DeployApp 6.81
164 TestMultiControlPlane/serial/PingHostFromPods 1.61
165 TestMultiControlPlane/serial/AddWorkerNode 61.99
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
168 TestMultiControlPlane/serial/CopyFile 20.35
169 TestMultiControlPlane/serial/StopSecondaryNode 12.9
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
171 TestMultiControlPlane/serial/RestartSecondaryNode 30.36
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.41
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 133.71
174 TestMultiControlPlane/serial/DeleteSecondaryNode 12.03
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
176 TestMultiControlPlane/serial/StopCluster 36.26
177 TestMultiControlPlane/serial/RestartCluster 80.33
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.83
179 TestMultiControlPlane/serial/AddSecondaryNode 54.48
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.07
184 TestJSONOutput/start/Command 82.59
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.85
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.24
209 TestKicCustomNetwork/create_custom_network 40.45
210 TestKicCustomNetwork/use_default_bridge_network 36.04
211 TestKicExistingNetwork 41.42
212 TestKicCustomSubnet 37.52
213 TestKicStaticIP 38.47
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 74.46
218 TestMountStart/serial/StartWithMountFirst 9.59
219 TestMountStart/serial/VerifyMountFirst 0.28
220 TestMountStart/serial/StartWithMountSecond 9.51
221 TestMountStart/serial/VerifyMountSecond 0.27
222 TestMountStart/serial/DeleteFirst 1.7
223 TestMountStart/serial/VerifyMountPostDelete 0.27
224 TestMountStart/serial/Stop 1.29
225 TestMountStart/serial/RestartStopped 7.64
226 TestMountStart/serial/VerifyMountPostStop 0.27
229 TestMultiNode/serial/FreshStart2Nodes 140.69
230 TestMultiNode/serial/DeployApp2Nodes 5.88
231 TestMultiNode/serial/PingHostFrom2Pods 0.96
232 TestMultiNode/serial/AddNode 59.38
233 TestMultiNode/serial/MultiNodeLabels 0.1
234 TestMultiNode/serial/ProfileList 0.71
235 TestMultiNode/serial/CopyFile 10.57
236 TestMultiNode/serial/StopNode 2.47
237 TestMultiNode/serial/StartAfterStop 8.24
238 TestMultiNode/serial/RestartKeepsNodes 77.27
239 TestMultiNode/serial/DeleteNode 5.63
240 TestMultiNode/serial/StopMultiNode 23.98
241 TestMultiNode/serial/RestartMultiNode 50.93
242 TestMultiNode/serial/ValidateNameConflict 35.76
247 TestPreload 127.12
249 TestScheduledStopUnix 108.17
252 TestInsufficientStorage 13.18
253 TestRunningBinaryUpgrade 54.04
255 TestKubernetesUpgrade 368.16
256 TestMissingContainerUpgrade 114.02
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
259 TestNoKubernetes/serial/StartWithK8s 43.05
260 TestNoKubernetes/serial/StartWithStopK8s 38.87
261 TestNoKubernetes/serial/Start 10.65
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
263 TestNoKubernetes/serial/ProfileList 1.25
264 TestNoKubernetes/serial/Stop 1.44
265 TestNoKubernetes/serial/StartNoArgs 9.47
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
267 TestStoppedBinaryUpgrade/Setup 0.64
268 TestStoppedBinaryUpgrade/Upgrade 66.6
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.21
278 TestPause/serial/Start 84.27
279 TestPause/serial/SecondStartNoReconfiguration 123.09
288 TestNetworkPlugins/group/false 4.75
293 TestStartStop/group/old-k8s-version/serial/FirstStart 58.61
294 TestStartStop/group/old-k8s-version/serial/DeployApp 9.5
296 TestStartStop/group/old-k8s-version/serial/Stop 12.02
297 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
298 TestStartStop/group/old-k8s-version/serial/SecondStart 46.7
299 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
300 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
301 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
304 TestStartStop/group/no-preload/serial/FirstStart 78.91
306 TestStartStop/group/embed-certs/serial/FirstStart 90.98
307 TestStartStop/group/no-preload/serial/DeployApp 9.34
309 TestStartStop/group/no-preload/serial/Stop 12.02
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
311 TestStartStop/group/no-preload/serial/SecondStart 49.12
312 TestStartStop/group/embed-certs/serial/DeployApp 8.41
314 TestStartStop/group/embed-certs/serial/Stop 12.14
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
316 TestStartStop/group/embed-certs/serial/SecondStart 49.36
317 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
319 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.7
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
328 TestStartStop/group/newest-cni/serial/FirstStart 38.78
329 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/Stop 1.36
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
333 TestStartStop/group/newest-cni/serial/SecondStart 15.83
334 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.45
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
340 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.18
341 TestNetworkPlugins/group/auto/Start 88.24
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
343 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.42
344 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
348 TestNetworkPlugins/group/kindnet/Start 92.03
349 TestNetworkPlugins/group/auto/KubeletFlags 0.39
350 TestNetworkPlugins/group/auto/NetCatPod 11.4
351 TestNetworkPlugins/group/auto/DNS 0.19
352 TestNetworkPlugins/group/auto/Localhost 0.15
353 TestNetworkPlugins/group/auto/HairPin 0.16
354 TestNetworkPlugins/group/calico/Start 66.25
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.62
357 TestNetworkPlugins/group/kindnet/NetCatPod 12.39
358 TestNetworkPlugins/group/calico/ControllerPod 6.01
359 TestNetworkPlugins/group/kindnet/DNS 0.17
360 TestNetworkPlugins/group/kindnet/Localhost 0.15
361 TestNetworkPlugins/group/kindnet/HairPin 0.14
362 TestNetworkPlugins/group/calico/KubeletFlags 0.34
363 TestNetworkPlugins/group/calico/NetCatPod 9.27
364 TestNetworkPlugins/group/calico/DNS 0.3
365 TestNetworkPlugins/group/calico/Localhost 0.21
366 TestNetworkPlugins/group/calico/HairPin 0.25
367 TestNetworkPlugins/group/custom-flannel/Start 69.77
368 TestNetworkPlugins/group/enable-default-cni/Start 81.95
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
371 TestNetworkPlugins/group/custom-flannel/DNS 0.16
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
374 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
375 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.43
376 TestNetworkPlugins/group/flannel/Start 70.94
377 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
378 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
379 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
380 TestNetworkPlugins/group/bridge/Start 76.07
381 TestNetworkPlugins/group/flannel/ControllerPod 6
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
383 TestNetworkPlugins/group/flannel/NetCatPod 11.31
384 TestNetworkPlugins/group/flannel/DNS 0.19
385 TestNetworkPlugins/group/flannel/Localhost 0.13
386 TestNetworkPlugins/group/flannel/HairPin 0.14
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
388 TestNetworkPlugins/group/bridge/NetCatPod 11.36
389 TestNetworkPlugins/group/bridge/DNS 0.17
390 TestNetworkPlugins/group/bridge/Localhost 0.12
391 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.28.0/json-events (5.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-598496 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-598496 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.626637255s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.63s)
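With -o=json, minikube start emits machine-readable progress events (one JSON object per line, CloudEvents-style) instead of human-oriented text, which is what this json-events test exercises. A minimal sketch of consuming the same stream by hand, assuming jq is installed (the profile name is illustrative):

	out/minikube-linux-arm64 start -o=json --download-only -p demo --driver=docker --container-runtime=crio | jq -r .type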

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1025 08:29:48.583655    4110 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1025 08:29:48.583733    4110 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-598496
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-598496: exit status 85 (74.938402ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-598496 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-598496 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:29:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:29:43.010543    4115 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:29:43.010691    4115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:43.010726    4115 out.go:374] Setting ErrFile to fd 2...
	I1025 08:29:43.010739    4115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:43.011021    4115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	W1025 08:29:43.011165    4115 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21796-2312/.minikube/config/config.json: open /home/jenkins/minikube-integration/21796-2312/.minikube/config/config.json: no such file or directory
	I1025 08:29:43.011592    4115 out.go:368] Setting JSON to true
	I1025 08:29:43.012362    4115 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":734,"bootTime":1761380249,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 08:29:43.012431    4115 start.go:141] virtualization:  
	I1025 08:29:43.016783    4115 out.go:99] [download-only-598496] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1025 08:29:43.016965    4115 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 08:29:43.017076    4115 notify.go:220] Checking for updates...
	I1025 08:29:43.021594    4115 out.go:171] MINIKUBE_LOCATION=21796
	I1025 08:29:43.024683    4115 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:29:43.027661    4115 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 08:29:43.030458    4115 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 08:29:43.033245    4115 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1025 08:29:43.039022    4115 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 08:29:43.039284    4115 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:29:43.066564    4115 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 08:29:43.066686    4115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:29:43.467988    4115 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-25 08:29:43.458929551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 08:29:43.468099    4115 docker.go:318] overlay module found
	I1025 08:29:43.471240    4115 out.go:99] Using the docker driver based on user configuration
	I1025 08:29:43.471283    4115 start.go:305] selected driver: docker
	I1025 08:29:43.471294    4115 start.go:925] validating driver "docker" against <nil>
	I1025 08:29:43.471410    4115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:29:43.536905    4115 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-25 08:29:43.528052784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 08:29:43.537063    4115 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:29:43.537353    4115 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1025 08:29:43.537549    4115 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 08:29:43.540503    4115 out.go:171] Using Docker driver with root privileges
	I1025 08:29:43.543316    4115 cni.go:84] Creating CNI manager for ""
	I1025 08:29:43.543378    4115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 08:29:43.543390    4115 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 08:29:43.543467    4115 start.go:349] cluster config:
	{Name:download-only-598496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-598496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:29:43.546346    4115 out.go:99] Starting "download-only-598496" primary control-plane node in "download-only-598496" cluster
	I1025 08:29:43.546370    4115 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 08:29:43.549148    4115 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1025 08:29:43.549174    4115 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 08:29:43.549320    4115 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 08:29:43.564842    4115 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 08:29:43.565028    4115 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 08:29:43.565126    4115 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 08:29:43.602250    4115 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1025 08:29:43.602277    4115 cache.go:58] Caching tarball of preloaded images
	I1025 08:29:43.602430    4115 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 08:29:43.605753    4115 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1025 08:29:43.605785    4115 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1025 08:29:43.691953    4115 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1025 08:29:43.692119    4115 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1025 08:29:46.796511    4115 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1025 08:29:46.796918    4115 profile.go:143] Saving config to /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/download-only-598496/config.json ...
	I1025 08:29:46.796963    4115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/download-only-598496/config.json: {Name:mke5cee9f9a91f60297d1e43eeb4ee87a477edcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 08:29:46.797144    4115 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 08:29:46.797368    4115 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21796-2312/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-598496 host does not exist
	  To start a cluster, run: "minikube start -p download-only-598496"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
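As the Last Start log above shows, minikube asks the GCS API for the preload tarball's md5 checksum, appends it to the download URL (?checksum=md5:...), and caches the verified tarball under .minikube/cache/preloaded-tarball. The cached artifact can be re-verified by hand; the path and expected digest below are taken from the log, and md5sum is assumed to be available on the host:

	md5sum /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	# expected digest per the GCS API: e092595ade89dbfc477bd4cd6b9c633b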

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-598496
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (4.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-661693 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-661693 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.055751164s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1025 08:29:53.072731    4110 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1025 08:29:53.072768    4110 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21796-2312/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-661693
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-661693: exit status 85 (101.742768ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-598496 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-598496 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-598496                                                                                                                                                   │ download-only-598496 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │ 25 Oct 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-661693 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-661693 │ jenkins │ v1.37.0 │ 25 Oct 25 08:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 08:29:49
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 08:29:49.059955    4310 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:29:49.060123    4310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:49.060153    4310 out.go:374] Setting ErrFile to fd 2...
	I1025 08:29:49.060175    4310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:29:49.060452    4310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:29:49.060870    4310 out.go:368] Setting JSON to true
	I1025 08:29:49.061601    4310 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":740,"bootTime":1761380249,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 08:29:49.061689    4310 start.go:141] virtualization:  
	I1025 08:29:49.065010    4310 out.go:99] [download-only-661693] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 08:29:49.065224    4310 notify.go:220] Checking for updates...
	I1025 08:29:49.068202    4310 out.go:171] MINIKUBE_LOCATION=21796
	I1025 08:29:49.071220    4310 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:29:49.074162    4310 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 08:29:49.076940    4310 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 08:29:49.079883    4310 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1025 08:29:49.085542    4310 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 08:29:49.085800    4310 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:29:49.114057    4310 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 08:29:49.114168    4310 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:29:49.172553    4310 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-25 08:29:49.16316967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 08:29:49.172660    4310 docker.go:318] overlay module found
	I1025 08:29:49.175627    4310 out.go:99] Using the docker driver based on user configuration
	I1025 08:29:49.175663    4310 start.go:305] selected driver: docker
	I1025 08:29:49.175670    4310 start.go:925] validating driver "docker" against <nil>
	I1025 08:29:49.175771    4310 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:29:49.230342    4310 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-25 08:29:49.221513557 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 08:29:49.230526    4310 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 08:29:49.230862    4310 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1025 08:29:49.231016    4310 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 08:29:49.234103    4310 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-661693 host does not exist
	  To start a cluster, run: "minikube start -p download-only-661693"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.27s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-661693
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I1025 08:29:54.312636    4110 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-223268 --alsologtostderr --binary-mirror http://127.0.0.1:41137 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-223268" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-223268
--- PASS: TestBinaryMirror (0.62s)
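The --binary-mirror flag redirects minikube's kubectl/kubelet/kubeadm downloads from dl.k8s.io to an alternate host; here the test points it at a throwaway local server on 127.0.0.1:41137. A rough by-hand equivalent, assuming a ./mirror directory laid out like dl.k8s.io's release tree (the port, directory, and profile name are illustrative):

	python3 -m http.server 41137 --directory ./mirror &
	out/minikube-linux-arm64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:41137 --driver=docker --container-runtime=crio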

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.1s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-468341
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-468341: exit status 85 (98.451721ms)

                                                
                                                
-- stdout --
	* Profile "addons-468341" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-468341"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.10s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-468341
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-468341: exit status 85 (108.074004ms)

                                                
                                                
-- stdout --
	* Profile "addons-468341" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-468341"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)

                                                
                                    
x
+
TestAddons/Setup (181.3s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-468341 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-468341 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m1.295838691s)
--- PASS: TestAddons/Setup (181.30s)
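This single start stacks fifteen --addons flags, so every addon the suite exercises is installed up front, and --wait=true blocks until the cluster reports healthy. The resulting enabled/disabled matrix can be inspected afterwards (illustrative follow-up, not part of the recorded run):

	out/minikube-linux-arm64 -p addons-468341 addons list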

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.23s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-468341 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-468341 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.23s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.81s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-468341 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-468341 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b2c51099-ed01-490b-8beb-a2b7dda7feb9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b2c51099-ed01-490b-8beb-a2b7dda7feb9] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003112719s
addons_test.go:694: (dbg) Run:  kubectl --context addons-468341 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-468341 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-468341 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-468341 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.81s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-468341
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-468341: (12.142707988s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-468341
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-468341
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-468341
--- PASS: TestAddons/StoppedEnableDisable (12.41s)
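The StoppedEnableDisable flow above shows that addon enable/disable still works against a stopped cluster. Condensed from the log into a runnable sketch (a stock minikube binary stands in for the test's out/minikube-linux-arm64 build):

	minikube stop -p addons-468341
	# enabling/disabling addons works even while the cluster is stopped
	minikube addons enable dashboard -p addons-468341
	minikube addons disable dashboard -p addons-468341
	# disabling an addon that was never enabled also exits cleanly
	minikube addons disable gvisor -p addons-468341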

                                                
                                    
TestCertOptions (38.79s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-483456 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-483456 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.898434995s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-483456 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-483456 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-483456 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-483456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-483456
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-483456: (2.071766334s)
--- PASS: TestCertOptions (38.79s)

                                                
                                    
TestCertExpiration (243.95s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-440252 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-440252 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (37.807902162s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-440252 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-440252 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (23.21044675s)
helpers_test.go:175: Cleaning up "cert-expiration-440252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-440252
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-440252: (2.932511453s)
--- PASS: TestCertExpiration (243.95s)
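TestCertExpiration first brings the profile up with certificates that expire after three minutes, then, once they have lapsed, runs start again with an 8760h expiry, verifying that a restart recovers a cluster whose certs have expired. The two starts, condensed from the log (stock minikube binary assumed):

	minikube start -p cert-expiration-440252 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=crio
	# wait out the 3m expiry, then restart with a one-year expiry
	minikube start -p cert-expiration-440252 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=crio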

                                                
                                    
TestForceSystemdFlag (45.74s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-100847 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1025 09:27:57.139590    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-100847 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.271082936s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-100847 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-100847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-100847
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-100847: (3.011708546s)
--- PASS: TestForceSystemdFlag (45.74s)
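TestForceSystemdFlag starts a node with --force-systemd and then reads back the generated CRI-O drop-in to confirm the cgroup manager was switched. A manual spot-check along the same lines (the expected cgroup_manager value is an assumption about CRI-O's config format, not something shown in this log):

	minikube start -p force-systemd-flag-100847 --memory=3072 --force-systemd --driver=docker --container-runtime=crio
	# the drop-in should now select systemd, e.g. a line like: cgroup_manager = "systemd"
	minikube -p force-systemd-flag-100847 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"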

                                                
                                    
TestForceSystemdEnv (46.84s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-991333 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-991333 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.966196702s)
helpers_test.go:175: Cleaning up "force-systemd-env-991333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-991333
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-991333: (2.869057542s)
--- PASS: TestForceSystemdEnv (46.84s)

                                                
                                    
TestErrorSpam/setup (32.55s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-238487 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-238487 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-238487 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-238487 --driver=docker  --container-runtime=crio: (32.554729385s)
--- PASS: TestErrorSpam/setup (32.55s)

                                                
                                    
TestErrorSpam/start (0.8s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 status
--- PASS: TestErrorSpam/status (1.14s)

                                                
                                    
TestErrorSpam/pause (5.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 pause: exit status 80 (1.976323618s)

-- stdout --
	* Pausing node nospam-238487 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:37:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 pause: exit status 80 (1.514414409s)

-- stdout --
	* Pausing node nospam-238487 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:37:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 pause: exit status 80 (2.163331974s)

-- stdout --
	* Pausing node nospam-238487 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:37:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.65s)
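All three pause attempts above fail identically: the GUEST_PAUSE path shells into the node and lists containers via runc, and that listing fails because /run/runc is missing on this image. The underlying check can be reproduced by hand (assuming the nospam-238487 profile is still running; stock minikube binary assumed):

	# the same listing minikube's pause code runs on the node
	minikube -p nospam-238487 ssh "sudo runc list -f json"
	# observed here: exit status 1, "open /run/runc: no such file or directory"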

                                                
                                    
TestErrorSpam/unpause (6.03s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 unpause: exit status 80 (1.828668703s)

-- stdout --
	* Unpausing node nospam-238487 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:37:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 unpause: exit status 80 (2.326508239s)

-- stdout --
	* Unpausing node nospam-238487 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:37:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 unpause: exit status 80 (1.871524678s)

-- stdout --
	* Unpausing node nospam-238487 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T08:37:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.03s)

                                                
                                    
TestErrorSpam/stop (1.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 stop: (1.326973184s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-238487 --log_dir /tmp/nospam-238487 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21796-2312/.minikube/files/etc/test/nested/copy/4110/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (78.55s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562171 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1025 08:37:57.140037    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:57.146390    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:57.157723    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:57.179122    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:57.220487    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:57.301940    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:57.463570    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:57.785287    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:58.427386    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:37:59.708852    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:38:02.270291    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:38:07.391664    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:38:17.634805    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-562171 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m18.551180532s)
--- PASS: TestFunctional/serial/StartWithProxy (78.55s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29.11s)

=== RUN   TestFunctional/serial/SoftStart
I1025 08:38:36.020887    4110 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562171 --alsologtostderr -v=8
E1025 08:38:38.116494    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-562171 --alsologtostderr -v=8: (29.111393832s)
functional_test.go:678: soft start took 29.111877808s for "functional-562171" cluster.
I1025 08:39:05.132558    4110 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.11s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-562171 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-562171 cache add registry.k8s.io/pause:3.1: (1.165439698s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-562171 cache add registry.k8s.io/pause:3.3: (1.167559007s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-562171 cache add registry.k8s.io/pause:latest: (1.146847631s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-562171 /tmp/TestFunctionalserialCacheCmdcacheadd_local3208660002/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 cache add minikube-local-cache-test:functional-562171
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 cache delete minikube-local-cache-test:functional-562171
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-562171
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562171 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (293.995588ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)
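The cache_reload test removes a cached image from the node, proves it is gone, then restores it with a single reload. The same cycle, condensed from the log (a stock minikube binary stands in for the test build):

	minikube -p functional-562171 ssh sudo crictl rmi registry.k8s.io/pause:latest
	# now absent: this inspecti exits 1 with "no such image"
	minikube -p functional-562171 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# push the local cache back onto the node, after which inspecti succeeds
	minikube -p functional-562171 cache reload
	minikube -p functional-562171 ssh sudo crictl inspecti registry.k8s.io/pause:latest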

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 kubectl -- --context functional-562171 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-562171 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (53.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562171 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 08:39:19.077942    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-562171 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (53.261053248s)
functional_test.go:776: restart took 53.261152055s for "functional-562171" cluster.
I1025 08:40:05.800204    4110 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (53.26s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-562171 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
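ComponentHealth fetches every tier=control-plane pod as JSON and asserts each component is Running and Ready, as echoed above. A quick manual approximation (the jq filter is illustrative and not part of the test):

	kubectl --context functional-562171 get po -l tier=control-plane -n kube-system -o=json \
		| jq -r '.items[] | "\(.metadata.name): \(.status.phase)"'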

                                                
                                    
TestFunctional/serial/LogsCmd (1.47s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-562171 logs: (1.474096696s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.46s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 logs --file /tmp/TestFunctionalserialLogsFileCmd775732147/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-562171 logs --file /tmp/TestFunctionalserialLogsFileCmd775732147/001/logs.txt: (1.461674363s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                    
TestFunctional/serial/InvalidService (4.54s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-562171 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-562171
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-562171: exit status 115 (392.43354ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30896 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-562171 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.54s)
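InvalidService confirms that minikube service reports SVC_UNREACHABLE (exit status 115) for a service whose pods can never run, instead of printing a dead URL and exiting zero. Condensed from the log (testdata/invalidsvc.yaml ships with the minikube integration-test tree; stock minikube binary assumed):

	kubectl --context functional-562171 apply -f testdata/invalidsvc.yaml
	# exits 115: "service not available: no running pod for service invalid-svc found"
	minikube service invalid-svc -p functional-562171
	kubectl --context functional-562171 delete -f testdata/invalidsvc.yaml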

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562171 config get cpus: exit status 14 (79.686996ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562171 config get cpus: exit status 14 (70.43368ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.43s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-562171 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-562171 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 32123: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.43s)

                                                
                                    
TestFunctional/parallel/DryRun (0.49s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562171 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-562171 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (224.761207ms)

-- stdout --
	* [functional-562171] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1025 08:50:33.835543   29852 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:50:33.835785   29852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:50:33.835815   29852 out.go:374] Setting ErrFile to fd 2...
	I1025 08:50:33.835855   29852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:50:33.836147   29852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:50:33.836558   29852 out.go:368] Setting JSON to false
	I1025 08:50:33.837431   29852 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1985,"bootTime":1761380249,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 08:50:33.837530   29852 start.go:141] virtualization:  
	I1025 08:50:33.841137   29852 out.go:179] * [functional-562171] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 08:50:33.844835   29852 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 08:50:33.844908   29852 notify.go:220] Checking for updates...
	I1025 08:50:33.850317   29852 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:50:33.853185   29852 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 08:50:33.856043   29852 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 08:50:33.858988   29852 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 08:50:33.861835   29852 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 08:50:33.865090   29852 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:50:33.865707   29852 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:50:33.905461   29852 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 08:50:33.905576   29852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:50:33.982042   29852 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 08:50:33.967664577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 08:50:33.982154   29852 docker.go:318] overlay module found
	I1025 08:50:33.985259   29852 out.go:179] * Using the docker driver based on existing profile
	I1025 08:50:33.988217   29852 start.go:305] selected driver: docker
	I1025 08:50:33.988238   29852 start.go:925] validating driver "docker" against &{Name:functional-562171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-562171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:50:33.988327   29852 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 08:50:33.991958   29852 out.go:203] 
	W1025 08:50:33.998333   29852 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 08:50:34.003140   29852 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562171 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.49s)
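The dry run deliberately requests 250MB so validation fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23, before any resources are created), then repeats without the override to show the happy path. The essentials (stock minikube binary assumed):

	# rejected: 250MiB is below the 1800MB usable minimum (exit 23)
	minikube start -p functional-562171 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	# the same dry run with default memory validates cleanly
	minikube start -p functional-562171 --dry-run --driver=docker --container-runtime=crio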

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.27s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562171 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-562171 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (269.487074ms)

-- stdout --
	* [functional-562171] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1025 08:50:46.858835   31693 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:50:46.859172   31693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:50:46.859187   31693 out.go:374] Setting ErrFile to fd 2...
	I1025 08:50:46.859193   31693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:50:46.859610   31693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:50:46.860065   31693 out.go:368] Setting JSON to false
	I1025 08:50:46.862742   31693 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1998,"bootTime":1761380249,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 08:50:46.862851   31693 start.go:141] virtualization:  
	I1025 08:50:46.867283   31693 out.go:179] * [functional-562171] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1025 08:50:46.870361   31693 notify.go:220] Checking for updates...
	I1025 08:50:46.876761   31693 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 08:50:46.879713   31693 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 08:50:46.882725   31693 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 08:50:46.885663   31693 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 08:50:46.888548   31693 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 08:50:46.891446   31693 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 08:50:46.894811   31693 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:50:46.895542   31693 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 08:50:46.928985   31693 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 08:50:46.929104   31693 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:50:47.017587   31693 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 08:50:47.003988284 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 08:50:47.017766   31693 docker.go:318] overlay module found
	I1025 08:50:47.021013   31693 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1025 08:50:47.024063   31693 start.go:305] selected driver: docker
	I1025 08:50:47.024088   31693 start.go:925] validating driver "docker" against &{Name:functional-562171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-562171 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 08:50:47.024218   31693 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 08:50:47.028169   31693 out.go:203] 
	W1025 08:50:47.031115   31693 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 08:50:47.034101   31693 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)
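
The French output is the point of this test: minikube localizes user-facing messages from the process locale, so the same --dry-run failure comes back translated ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY" is the localized exit message seen in the English run above). A sketch of reproducing it manually, assuming a French locale is installed on the host and that minikube's locale detection picks up LC_ALL:

    # Same command as the English DryRun case, forced into a French locale for one invocation.
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-562171 --dry-run --memory 250MB --driver=docker --container-runtime=crio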

TestFunctional/parallel/StatusCmd (1.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)
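
The -f flag shown above takes a Go template over the status fields (Host, Kubelet, APIServer, Kubeconfig), so scripts can pull out a single value; the "kublet:" label in the logged command is just literal template text, typo included. A minimal example of scripting against one field:

    # Prints only the API server state, e.g. "Running".
    out/minikube-linux-arm64 -p functional-562171 status -f '{{.APIServer}}'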

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (24.64s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [5200b4f3-8077-4afd-a312-bbcd3f6ae29d] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003696931s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-562171 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-562171 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-562171 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-562171 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [05eab6f0-9d0a-4a37-9918-fceecfa0b4af] Pending
helpers_test.go:352: "sp-pod" [05eab6f0-9d0a-4a37-9918-fceecfa0b4af] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [05eab6f0-9d0a-4a37-9918-fceecfa0b4af] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003735023s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-562171 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-562171 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-562171 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [eca64e6f-6078-43cd-9990-649a58a91d9e] Pending
helpers_test.go:352: "sp-pod" [eca64e6f-6078-43cd-9990-649a58a91d9e] Running
E1025 08:40:41.004038    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003429185s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-562171 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.64s)
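
The sequence above is the standard persistence check: bind a PVC, write through the pod's mount, delete and recreate the pod, and confirm the file survived the restart. Condensed to the kubectl calls actually run (manifests are the repo's testdata):

    kubectl --context functional-562171 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-562171 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-562171 exec sp-pod -- touch /tmp/mount/foo    # write through the mount
    kubectl --context functional-562171 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-562171 apply -f testdata/storage-provisioner/pod.yaml    # new pod, same claim
    kubectl --context functional-562171 exec sp-pod -- ls /tmp/mount           # foo is still there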

TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (2.05s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh -n functional-562171 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 cp functional-562171:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1907050037/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh -n functional-562171 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh -n functional-562171 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.05s)

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4110/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "sudo cat /etc/test/nested/copy/4110/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)
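
FileSync exercises minikube's file-sync feature: files staged under $MINIKUBE_HOME/files are copied into the node at the mirrored absolute path when the machine is (re)started. A sketch under those assumptions, using this run's path (the 4110 component is just the test process's PID):

    # Stage the file on the host; a subsequent start of the profile syncs it into the node.
    mkdir -p ~/.minikube/files/etc/test/nested/copy/4110
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/4110/hosts
    out/minikube-linux-arm64 -p functional-562171 ssh "sudo cat /etc/test/nested/copy/4110/hosts"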

TestFunctional/parallel/CertSync (2.2s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4110.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "sudo cat /etc/ssl/certs/4110.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4110.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "sudo cat /usr/share/ca-certificates/4110.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41102.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "sudo cat /etc/ssl/certs/41102.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41102.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "sudo cat /usr/share/ca-certificates/41102.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.20s)
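
The 51391683.0 and 3ec20f2e.0 names above follow the OpenSSL c_rehash convention: a certificate is installed under its subject hash so the TLS stack can find it by directory scan. One way to confirm the mapping for a given certificate (the path here is illustrative):

    # The printed hash should match the .0 filename the cert was installed under in /etc/ssl/certs.
    openssl x509 -noout -subject_hash -in /path/to/4110.pem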

TestFunctional/parallel/NodeLabels (0.13s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-562171 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562171 ssh "sudo systemctl is-active docker": exit status 1 (356.984028ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562171 ssh "sudo systemctl is-active containerd": exit status 1 (355.776042ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
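
Both non-zero exits here are the expected result, not failures: systemctl is-active prints "inactive" and exits with status 3 for a stopped unit (the "Process exited with status 3" in stderr), and minikube ssh surfaces that as its own non-zero exit, which is exactly what the test asserts for docker and containerd on a crio node. The check boils down to:

    # systemctl exits 3 for an inactive unit; minikube ssh turns that into a non-zero exit of its own.
    out/minikube-linux-arm64 -p functional-562171 ssh "sudo systemctl is-active docker"; echo "exit=$?"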

TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.64s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

TestFunctional/parallel/ImageCommands/ImageListShort (1.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-arm64 -p functional-562171 image ls --format short --alsologtostderr: (1.873832795s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-562171 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-562171 image ls --format short --alsologtostderr:
I1025 08:50:54.751400   33039 out.go:360] Setting OutFile to fd 1 ...
I1025 08:50:54.752003   33039 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:50:54.752035   33039 out.go:374] Setting ErrFile to fd 2...
I1025 08:50:54.752054   33039 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:50:54.752352   33039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
I1025 08:50:54.752998   33039 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:50:54.753151   33039 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:50:54.753629   33039 cli_runner.go:164] Run: docker container inspect functional-562171 --format={{.State.Status}}
I1025 08:50:54.771952   33039 ssh_runner.go:195] Run: systemctl --version
I1025 08:50:54.772005   33039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
I1025 08:50:54.791405   33039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/functional-562171/id_rsa Username:docker}
I1025 08:50:54.904893   33039 ssh_runner.go:195] Run: sudo crictl images --output json
I1025 08:50:56.541033   33039 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.636112931s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.87s)
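
As the stderr shows, image ls on the crio runtime is a thin wrapper: minikube SSHes into the node, runs sudo crictl images --output json, and formats the result (the 1.6s crictl call accounts for nearly all of this test's runtime). The raw inventory can be pulled directly when debugging:

    # Same data, bypassing minikube's formatting.
    out/minikube-linux-arm64 -p functional-562171 ssh "sudo crictl images --output json"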

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-562171 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/library/nginx                 │ latest             │ e612b97116b41 │ 176MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-562171 image ls --format table --alsologtostderr:
I1025 08:50:58.992550   33289 out.go:360] Setting OutFile to fd 1 ...
I1025 08:50:58.992719   33289 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:50:58.992749   33289 out.go:374] Setting ErrFile to fd 2...
I1025 08:50:58.992770   33289 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:50:58.993035   33289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
I1025 08:50:58.993658   33289 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:50:58.993827   33289 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:50:58.994359   33289 cli_runner.go:164] Run: docker container inspect functional-562171 --format={{.State.Status}}
I1025 08:50:59.021346   33289 ssh_runner.go:195] Run: systemctl --version
I1025 08:50:59.021417   33289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
I1025 08:50:59.040083   33289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/functional-562171/id_rsa Username:docker}
I1025 08:50:59.144884   33289 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-562171 image ls --format json --alsologtostderr:
[{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde
6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDige
sts":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b
79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["
docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"e612b97116b41d24816faa9fd204e1177027648a2cb14bb627dd1eaab1494e8f","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f"],"repoTags":["docker.io/library/nginx:latest"],"size":"176071022"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a1894772a478e07c67a56e8bf32335fd
be1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-562171 image ls --format json --alsologtostderr:
I1025 08:50:58.758746   33251 out.go:360] Setting OutFile to fd 1 ...
I1025 08:50:58.758878   33251 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:50:58.758889   33251 out.go:374] Setting ErrFile to fd 2...
I1025 08:50:58.758894   33251 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:50:58.759157   33251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
I1025 08:50:58.759809   33251 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:50:58.759968   33251 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:50:58.760478   33251 cli_runner.go:164] Run: docker container inspect functional-562171 --format={{.State.Status}}
I1025 08:50:58.778565   33251 ssh_runner.go:195] Run: systemctl --version
I1025 08:50:58.778624   33251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
I1025 08:50:58.798499   33251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/functional-562171/id_rsa Username:docker}
I1025 08:50:58.904569   33251 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
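
The JSON format is the machine-readable variant of the same listing; combined with jq (assumed to be installed on the host), checks like "is nginx:alpine present" become one-liners against the array of image objects shown above:

    # Print every known tag, one per line; untagged entries (empty repoTags) yield nothing.
    out/minikube-linux-arm64 -p functional-562171 image ls --format json | jq -r '.[].repoTags[]'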

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-562171 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: e612b97116b41d24816faa9fd204e1177027648a2cb14bb627dd1eaab1494e8f
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f
repoTags:
- docker.io/library/nginx:latest
size: "176071022"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-562171 image ls --format yaml --alsologtostderr:
I1025 08:50:58.524655   33215 out.go:360] Setting OutFile to fd 1 ...
I1025 08:50:58.524869   33215 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:50:58.524897   33215 out.go:374] Setting ErrFile to fd 2...
I1025 08:50:58.524917   33215 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:50:58.525216   33215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
I1025 08:50:58.525902   33215 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:50:58.526094   33215 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:50:58.526585   33215 cli_runner.go:164] Run: docker container inspect functional-562171 --format={{.State.Status}}
I1025 08:50:58.544293   33215 ssh_runner.go:195] Run: systemctl --version
I1025 08:50:58.544342   33215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
I1025 08:50:58.561330   33215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/functional-562171/id_rsa Username:docker}
I1025 08:50:58.668467   33215 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562171 ssh pgrep buildkitd: exit status 1 (348.025195ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image build -t localhost/my-image:functional-562171 testdata/build --alsologtostderr
2025/10/25 08:50:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-562171 image build -t localhost/my-image:functional-562171 testdata/build --alsologtostderr: (3.883016406s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-562171 image build -t localhost/my-image:functional-562171 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 72c1ecfbdc6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-562171
--> b68a4c60c4b
Successfully tagged localhost/my-image:functional-562171
b68a4c60c4b409bc679b90d7ae574813b8849e8900733c7ad4b4f76282204c57
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-562171 image build -t localhost/my-image:functional-562171 testdata/build --alsologtostderr:
I1025 08:50:56.964202   33160 out.go:360] Setting OutFile to fd 1 ...
I1025 08:50:56.964476   33160 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:50:56.964507   33160 out.go:374] Setting ErrFile to fd 2...
I1025 08:50:56.964526   33160 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 08:50:56.964819   33160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
I1025 08:50:56.965569   33160 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:50:56.966616   33160 config.go:182] Loaded profile config "functional-562171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 08:50:56.968011   33160 cli_runner.go:164] Run: docker container inspect functional-562171 --format={{.State.Status}}
I1025 08:50:56.999152   33160 ssh_runner.go:195] Run: systemctl --version
I1025 08:50:56.999226   33160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562171
I1025 08:50:57.018824   33160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/functional-562171/id_rsa Username:docker}
I1025 08:50:57.141117   33160 build_images.go:161] Building image from path: /tmp/build.766265043.tar
I1025 08:50:57.141218   33160 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 08:50:57.153390   33160 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.766265043.tar
I1025 08:50:57.161468   33160 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.766265043.tar: stat -c "%s %y" /var/lib/minikube/build/build.766265043.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.766265043.tar': No such file or directory
I1025 08:50:57.161539   33160 ssh_runner.go:362] scp /tmp/build.766265043.tar --> /var/lib/minikube/build/build.766265043.tar (3072 bytes)
I1025 08:50:57.181952   33160 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.766265043
I1025 08:50:57.190913   33160 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.766265043 -xf /var/lib/minikube/build/build.766265043.tar
I1025 08:50:57.199824   33160 crio.go:315] Building image: /var/lib/minikube/build/build.766265043
I1025 08:50:57.199957   33160 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-562171 /var/lib/minikube/build/build.766265043 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1025 08:51:00.756130   33160 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-562171 /var/lib/minikube/build/build.766265043 --cgroup-manager=cgroupfs: (3.556132033s)
I1025 08:51:00.756194   33160 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.766265043
I1025 08:51:00.764380   33160 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.766265043.tar
I1025 08:51:00.772658   33160 build_images.go:217] Built localhost/my-image:functional-562171 from /tmp/build.766265043.tar
I1025 08:51:00.772688   33160 build_images.go:133] succeeded building to: functional-562171
I1025 08:51:00.772694   33160 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.47s)
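
A detail worth noting from the stderr: there is no buildkitd on a crio node (hence the failed pgrep probe), so minikube tars the build context, copies it into the node, and drives the build with sudo podman build under the cgroupfs manager. The user-facing half of that flow is simply:

    # Build a local context directly into the node's image store, then confirm it landed.
    out/minikube-linux-arm64 -p functional-562171 image build -t localhost/my-image:functional-562171 testdata/build
    out/minikube-linux-arm64 -p functional-562171 image ls | grep my-image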

TestFunctional/parallel/ImageCommands/Setup (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-562171
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.71s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)
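
All three subtests run the same command: update-context rewrites the profile's server address in the active kubeconfig to the node's current IP and port, which matters after the container's mapped ports change across restarts. A minimal sketch of the usage:

    # Re-point kubeconfig at the node's current endpoint, then verify connectivity.
    out/minikube-linux-arm64 -p functional-562171 update-context
    kubectl --context functional-562171 get nodes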

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "488.812911ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "65.856461ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "478.057891ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "73.238497ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)
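
The JSON output exercised above is the machine-readable form of `profile list`. A hand-run sketch; the jq filter is an illustrative assumption about the usual top-level "valid" key and per-profile Name field, not something this test asserts:

    out/minikube-linux-arm64 profile list -o json
    out/minikube-linux-arm64 profile list -o json --light
    # illustrative: extract profile names, assuming a "valid" array of profiles
    out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'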

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image rm kicbase/echo-server:functional-562171 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-562171 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-562171 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-562171 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 28339: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-562171 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-562171 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-562171 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d2511d78-5f9d-46a5-8c33-94f983ded09b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [d2511d78-5f9d-46a5-8c33-94f983ded09b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003237114s
I1025 08:40:29.472762    4110 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.35s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-562171 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.167.122 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
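
The tunnel sub-tests above follow a fixed sequence: start `minikube tunnel`, deploy a LoadBalancer service, wait for an ingress IP, then hit it directly. A hand-run equivalent, sketched under the assumption that testdata/testsvc.yaml deploys the nginx-svc service shown above (the curl target is the IP reported by this particular run):

    out/minikube-linux-arm64 -p functional-562171 tunnel --alsologtostderr &
    kubectl --context functional-562171 apply -f testdata/testsvc.yaml
    kubectl --context functional-562171 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.96.167.122/   # address from this run; yours will differ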

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-562171 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-562171 /tmp/TestFunctionalparallelMountCmdany-port1046072839/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761382234275507604" to /tmp/TestFunctionalparallelMountCmdany-port1046072839/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761382234275507604" to /tmp/TestFunctionalparallelMountCmdany-port1046072839/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761382234275507604" to /tmp/TestFunctionalparallelMountCmdany-port1046072839/001/test-1761382234275507604
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562171 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (353.578775ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1025 08:50:34.629323    4110 retry.go:31] will retry after 342.931418ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 25 08:50 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 25 08:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 25 08:50 test-1761382234275507604
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh cat /mount-9p/test-1761382234275507604
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-562171 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [c6272ff8-1f89-4892-a2bd-d22db4465479] Pending
helpers_test.go:352: "busybox-mount" [c6272ff8-1f89-4892-a2bd-d22db4465479] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [c6272ff8-1f89-4892-a2bd-d22db4465479] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [c6272ff8-1f89-4892-a2bd-d22db4465479] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.020935068s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-562171 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562171 /tmp/TestFunctionalparallelMountCmdany-port1046072839/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.81s)
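
The any-port test boils down to: mount a host directory into the guest over 9p, verify it with findmnt (retrying once, as the Non-zero exit above shows), then exercise the mount from a pod. Sketched by hand; /tmp/hostdir is an illustrative path, and the final stat assumes busybox-mount-test.yaml writes /mount-9p/created-by-pod, as the log above implies:

    out/minikube-linux-arm64 mount -p functional-562171 /tmp/hostdir:/mount-9p &
    out/minikube-linux-arm64 -p functional-562171 ssh "findmnt -T /mount-9p | grep 9p"
    kubectl --context functional-562171 replace --force -f testdata/busybox-mount-test.yaml
    out/minikube-linux-arm64 -p functional-562171 ssh stat /mount-9p/created-by-pod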

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-562171 /tmp/TestFunctionalparallelMountCmdspecific-port3571774983/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562171 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (334.883006ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1025 08:50:41.425264    4110 retry.go:31] will retry after 669.03533ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562171 /tmp/TestFunctionalparallelMountCmdspecific-port3571774983/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562171 ssh "sudo umount -f /mount-9p": exit status 1 (276.490873ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-562171 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562171 /tmp/TestFunctionalparallelMountCmdspecific-port3571774983/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.07s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-562171 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3578841691/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-562171 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3578841691/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-562171 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3578841691/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562171 ssh "findmnt -T" /mount1: exit status 1 (635.29895ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1025 08:50:43.796701    4110 retry.go:31] will retry after 614.081999ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-562171 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562171 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3578841691/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562171 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3578841691/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562171 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3578841691/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.37s)
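
VerifyCleanup checks that `mount --kill` tears down every mount daemon for the profile in one call, which is why the three stop steps afterwards find no parent process. Roughly, by hand (/tmp/hostdir is illustrative; the mount points match this run):

    out/minikube-linux-arm64 mount -p functional-562171 /tmp/hostdir:/mount1 &
    out/minikube-linux-arm64 mount -p functional-562171 /tmp/hostdir:/mount2 &
    out/minikube-linux-arm64 mount -p functional-562171 /tmp/hostdir:/mount3 &
    # one call kills all mount processes belonging to the profile
    out/minikube-linux-arm64 mount -p functional-562171 --kill=true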

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.65s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-562171 service list -o json
functional_test.go:1504: Took "634.623194ms" to run "out/minikube-linux-arm64 -p functional-562171 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)
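
`service list -o json` is the machine-readable counterpart of the plain listing above. An illustrative filter, hedged: the jq step assumes the output is an array of objects carrying a Name field, which this test does not itself assert:

    out/minikube-linux-arm64 -p functional-562171 service list -o json
    # illustrative: service names only, assuming a per-entry Name field
    out/minikube-linux-arm64 -p functional-562171 service list -o json | jq -r '.[].Name'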

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-562171
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-562171
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-562171
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (208.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1025 08:52:57.140595    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:54:20.207541    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-098762 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m27.791631478s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (208.69s)
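
The HA suite starts from a three-control-plane cluster. For reference, the invocation below is exactly the one logged above, lifted out verbatim:

    out/minikube-linux-arm64 -p ha-098762 start --ha --memory 3072 --wait true \
      --alsologtostderr -v 5 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p ha-098762 status --alsologtostderr -v 5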

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-098762 kubectl -- rollout status deployment/busybox: (4.024346893s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-8rg6g -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-fcwjx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-mhnb2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-8rg6g -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-fcwjx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-mhnb2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-8rg6g -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-fcwjx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-mhnb2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.81s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-8rg6g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-8rg6g -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-fcwjx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-fcwjx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-mhnb2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-mhnb2 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)
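
The host-reachability check resolves host.minikube.internal inside each pod and pings the resulting gateway address. One pod's worth, verbatim from this run (awk 'NR==5' picks the address line out of nslookup's output):

    out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-8rg6g -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-arm64 -p ha-098762 kubectl -- exec busybox-7b57f96db7-8rg6g -- \
      sh -c "ping -c 1 192.168.49.1"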

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (61.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 node add --alsologtostderr -v 5
E1025 08:55:21.120949    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:55:21.127524    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:55:21.139276    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:55:21.160885    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:55:21.202400    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:55:21.283989    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:55:21.445444    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:55:21.767232    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:55:22.409035    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:55:23.690470    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:55:26.253327    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:55:31.375088    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:55:41.616518    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-098762 node add --alsologtostderr -v 5: (1m0.895549806s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-098762 status --alsologtostderr -v 5: (1.093532139s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.99s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-098762 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.080389361s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-098762 status --output json --alsologtostderr -v 5: (1.191712074s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp testdata/cp-test.txt ha-098762:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp ha-098762:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1508369442/001/cp-test_ha-098762.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp ha-098762:/home/docker/cp-test.txt ha-098762-m02:/home/docker/cp-test_ha-098762_ha-098762-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m02 "sudo cat /home/docker/cp-test_ha-098762_ha-098762-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp ha-098762:/home/docker/cp-test.txt ha-098762-m03:/home/docker/cp-test_ha-098762_ha-098762-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m03 "sudo cat /home/docker/cp-test_ha-098762_ha-098762-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp ha-098762:/home/docker/cp-test.txt ha-098762-m04:/home/docker/cp-test_ha-098762_ha-098762-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m04 "sudo cat /home/docker/cp-test_ha-098762_ha-098762-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp testdata/cp-test.txt ha-098762-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp ha-098762-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1508369442/001/cp-test_ha-098762-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp ha-098762-m02:/home/docker/cp-test.txt ha-098762:/home/docker/cp-test_ha-098762-m02_ha-098762.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762 "sudo cat /home/docker/cp-test_ha-098762-m02_ha-098762.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp ha-098762-m02:/home/docker/cp-test.txt ha-098762-m03:/home/docker/cp-test_ha-098762-m02_ha-098762-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m03 "sudo cat /home/docker/cp-test_ha-098762-m02_ha-098762-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp ha-098762-m02:/home/docker/cp-test.txt ha-098762-m04:/home/docker/cp-test_ha-098762-m02_ha-098762-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m04 "sudo cat /home/docker/cp-test_ha-098762-m02_ha-098762-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp testdata/cp-test.txt ha-098762-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp ha-098762-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1508369442/001/cp-test_ha-098762-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp ha-098762-m03:/home/docker/cp-test.txt ha-098762:/home/docker/cp-test_ha-098762-m03_ha-098762.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762 "sudo cat /home/docker/cp-test_ha-098762-m03_ha-098762.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp ha-098762-m03:/home/docker/cp-test.txt ha-098762-m02:/home/docker/cp-test_ha-098762-m03_ha-098762-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m02 "sudo cat /home/docker/cp-test_ha-098762-m03_ha-098762-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp ha-098762-m03:/home/docker/cp-test.txt ha-098762-m04:/home/docker/cp-test_ha-098762-m03_ha-098762-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m04 "sudo cat /home/docker/cp-test_ha-098762-m03_ha-098762-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp testdata/cp-test.txt ha-098762-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp ha-098762-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1508369442/001/cp-test_ha-098762-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp ha-098762-m04:/home/docker/cp-test.txt ha-098762:/home/docker/cp-test_ha-098762-m04_ha-098762.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762 "sudo cat /home/docker/cp-test_ha-098762-m04_ha-098762.txt"
E1025 08:56:02.097934    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp ha-098762-m04:/home/docker/cp-test.txt ha-098762-m02:/home/docker/cp-test_ha-098762-m04_ha-098762-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m02 "sudo cat /home/docker/cp-test_ha-098762-m04_ha-098762-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 cp ha-098762-m04:/home/docker/cp-test.txt ha-098762-m03:/home/docker/cp-test_ha-098762-m04_ha-098762-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m03 "sudo cat /home/docker/cp-test_ha-098762-m04_ha-098762-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.35s)
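
CopyFile walks the full matrix of host-to-node, node-to-host, and node-to-node copies, verifying each with ssh cat. One representative round trip, using the paths from this run:

    out/minikube-linux-arm64 -p ha-098762 cp testdata/cp-test.txt ha-098762:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-098762 cp ha-098762:/home/docker/cp-test.txt \
      ha-098762-m02:/home/docker/cp-test_ha-098762_ha-098762-m02.txt
    out/minikube-linux-arm64 -p ha-098762 ssh -n ha-098762-m02 \
      "sudo cat /home/docker/cp-test_ha-098762_ha-098762-m02.txt"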

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-098762 node stop m02 --alsologtostderr -v 5: (12.069471974s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-098762 status --alsologtostderr -v 5: exit status 7 (832.322067ms)

                                                
                                                
-- stdout --
	ha-098762
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-098762-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-098762-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-098762-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 08:56:16.645765   48076 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:56:16.646265   48076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:56:16.646280   48076 out.go:374] Setting ErrFile to fd 2...
	I1025 08:56:16.646285   48076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:56:16.646593   48076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:56:16.647529   48076 out.go:368] Setting JSON to false
	I1025 08:56:16.647567   48076 mustload.go:65] Loading cluster: ha-098762
	I1025 08:56:16.647623   48076 notify.go:220] Checking for updates...
	I1025 08:56:16.647991   48076 config.go:182] Loaded profile config "ha-098762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:56:16.648010   48076 status.go:174] checking status of ha-098762 ...
	I1025 08:56:16.648674   48076 cli_runner.go:164] Run: docker container inspect ha-098762 --format={{.State.Status}}
	I1025 08:56:16.672008   48076 status.go:371] ha-098762 host status = "Running" (err=<nil>)
	I1025 08:56:16.672035   48076 host.go:66] Checking if "ha-098762" exists ...
	I1025 08:56:16.672332   48076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-098762
	I1025 08:56:16.694858   48076 host.go:66] Checking if "ha-098762" exists ...
	I1025 08:56:16.695254   48076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 08:56:16.695328   48076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-098762
	I1025 08:56:16.734765   48076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/ha-098762/id_rsa Username:docker}
	I1025 08:56:16.845042   48076 ssh_runner.go:195] Run: systemctl --version
	I1025 08:56:16.851680   48076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 08:56:16.869648   48076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 08:56:16.965279   48076 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-25 08:56:16.955018578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 08:56:16.966057   48076 kubeconfig.go:125] found "ha-098762" server: "https://192.168.49.254:8443"
	I1025 08:56:16.966155   48076 api_server.go:166] Checking apiserver status ...
	I1025 08:56:16.966209   48076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 08:56:16.978822   48076 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1258/cgroup
	I1025 08:56:16.986919   48076 api_server.go:182] apiserver freezer: "9:freezer:/docker/73f7524564b9a0a6e31d25d63dda68b6658c0540f2643520f926cd00563a8d08/crio/crio-e84af6baab6264a13eb70b4d3accaf944a4b866e9e5133e4c854da62ce8a56f0"
	I1025 08:56:16.987009   48076 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/73f7524564b9a0a6e31d25d63dda68b6658c0540f2643520f926cd00563a8d08/crio/crio-e84af6baab6264a13eb70b4d3accaf944a4b866e9e5133e4c854da62ce8a56f0/freezer.state
	I1025 08:56:16.995013   48076 api_server.go:204] freezer state: "THAWED"
	I1025 08:56:16.995039   48076 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1025 08:56:17.003774   48076 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1025 08:56:17.003810   48076 status.go:463] ha-098762 apiserver status = Running (err=<nil>)
	I1025 08:56:17.003823   48076 status.go:176] ha-098762 status: &{Name:ha-098762 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 08:56:17.003843   48076 status.go:174] checking status of ha-098762-m02 ...
	I1025 08:56:17.004167   48076 cli_runner.go:164] Run: docker container inspect ha-098762-m02 --format={{.State.Status}}
	I1025 08:56:17.021642   48076 status.go:371] ha-098762-m02 host status = "Stopped" (err=<nil>)
	I1025 08:56:17.021666   48076 status.go:384] host is not running, skipping remaining checks
	I1025 08:56:17.021673   48076 status.go:176] ha-098762-m02 status: &{Name:ha-098762-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 08:56:17.021692   48076 status.go:174] checking status of ha-098762-m03 ...
	I1025 08:56:17.022084   48076 cli_runner.go:164] Run: docker container inspect ha-098762-m03 --format={{.State.Status}}
	I1025 08:56:17.039274   48076 status.go:371] ha-098762-m03 host status = "Running" (err=<nil>)
	I1025 08:56:17.039305   48076 host.go:66] Checking if "ha-098762-m03" exists ...
	I1025 08:56:17.039604   48076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-098762-m03
	I1025 08:56:17.056712   48076 host.go:66] Checking if "ha-098762-m03" exists ...
	I1025 08:56:17.057089   48076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 08:56:17.057138   48076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-098762-m03
	I1025 08:56:17.074592   48076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/ha-098762-m03/id_rsa Username:docker}
	I1025 08:56:17.179542   48076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 08:56:17.194335   48076 kubeconfig.go:125] found "ha-098762" server: "https://192.168.49.254:8443"
	I1025 08:56:17.194361   48076 api_server.go:166] Checking apiserver status ...
	I1025 08:56:17.194425   48076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 08:56:17.205603   48076 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1194/cgroup
	I1025 08:56:17.213830   48076 api_server.go:182] apiserver freezer: "9:freezer:/docker/a13a535f4abbce1551ea457e6f0b1337bd1c7ef841668a407ebcd7ae7a7403c7/crio/crio-0277825cf54371c27008c597abe93815cf76d86c5f295688a74d8475619370bc"
	I1025 08:56:17.213917   48076 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a13a535f4abbce1551ea457e6f0b1337bd1c7ef841668a407ebcd7ae7a7403c7/crio/crio-0277825cf54371c27008c597abe93815cf76d86c5f295688a74d8475619370bc/freezer.state
	I1025 08:56:17.221615   48076 api_server.go:204] freezer state: "THAWED"
	I1025 08:56:17.221691   48076 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1025 08:56:17.230033   48076 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1025 08:56:17.230064   48076 status.go:463] ha-098762-m03 apiserver status = Running (err=<nil>)
	I1025 08:56:17.230086   48076 status.go:176] ha-098762-m03 status: &{Name:ha-098762-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 08:56:17.230108   48076 status.go:174] checking status of ha-098762-m04 ...
	I1025 08:56:17.230430   48076 cli_runner.go:164] Run: docker container inspect ha-098762-m04 --format={{.State.Status}}
	I1025 08:56:17.255722   48076 status.go:371] ha-098762-m04 host status = "Running" (err=<nil>)
	I1025 08:56:17.255760   48076 host.go:66] Checking if "ha-098762-m04" exists ...
	I1025 08:56:17.256062   48076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-098762-m04
	I1025 08:56:17.274504   48076 host.go:66] Checking if "ha-098762-m04" exists ...
	I1025 08:56:17.274855   48076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 08:56:17.274904   48076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-098762-m04
	I1025 08:56:17.293552   48076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/ha-098762-m04/id_rsa Username:docker}
	I1025 08:56:17.396519   48076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 08:56:17.411965   48076 status.go:176] ha-098762-m04 status: &{Name:ha-098762-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.90s)
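
Note the status exit code: with one control-plane node stopped, `minikube status` exits 7 rather than 0, which is what the test asserts via the Non-zero exit line above. By hand (the $? check is an illustrative addition):

    out/minikube-linux-arm64 -p ha-098762 node stop m02 --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-098762 status --alsologtostderr -v 5
    echo $?   # 7 while any node in the profile is stopped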

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (30.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 node start m02 --alsologtostderr -v 5
E1025 08:56:43.059423    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-098762 node start m02 --alsologtostderr -v 5: (28.931651109s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-098762 status --alsologtostderr -v 5: (1.312612275s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.36s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.408211315s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.41s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (133.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-098762 stop --alsologtostderr -v 5: (37.794091071s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 start --wait true --alsologtostderr -v 5
E1025 08:57:57.143921    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 08:58:04.982158    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-098762 start --wait true --alsologtostderr -v 5: (1m35.706072062s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (133.71s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-098762 node delete m03 --alsologtostderr -v 5: (11.019877114s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.03s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-098762 stop --alsologtostderr -v 5: (36.143002101s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-098762 status --alsologtostderr -v 5: exit status 7 (115.699751ms)

                                                
                                                
-- stdout --
	ha-098762
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-098762-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-098762-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 08:59:52.735599   60162 out.go:360] Setting OutFile to fd 1 ...
	I1025 08:59:52.735812   60162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:59:52.735839   60162 out.go:374] Setting ErrFile to fd 2...
	I1025 08:59:52.735859   60162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 08:59:52.736131   60162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 08:59:52.736348   60162 out.go:368] Setting JSON to false
	I1025 08:59:52.736406   60162 mustload.go:65] Loading cluster: ha-098762
	I1025 08:59:52.736435   60162 notify.go:220] Checking for updates...
	I1025 08:59:52.736861   60162 config.go:182] Loaded profile config "ha-098762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 08:59:52.737163   60162 status.go:174] checking status of ha-098762 ...
	I1025 08:59:52.737741   60162 cli_runner.go:164] Run: docker container inspect ha-098762 --format={{.State.Status}}
	I1025 08:59:52.755782   60162 status.go:371] ha-098762 host status = "Stopped" (err=<nil>)
	I1025 08:59:52.755803   60162 status.go:384] host is not running, skipping remaining checks
	I1025 08:59:52.755810   60162 status.go:176] ha-098762 status: &{Name:ha-098762 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 08:59:52.755841   60162 status.go:174] checking status of ha-098762-m02 ...
	I1025 08:59:52.756144   60162 cli_runner.go:164] Run: docker container inspect ha-098762-m02 --format={{.State.Status}}
	I1025 08:59:52.772665   60162 status.go:371] ha-098762-m02 host status = "Stopped" (err=<nil>)
	I1025 08:59:52.772684   60162 status.go:384] host is not running, skipping remaining checks
	I1025 08:59:52.772692   60162 status.go:176] ha-098762-m02 status: &{Name:ha-098762-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 08:59:52.772711   60162 status.go:174] checking status of ha-098762-m04 ...
	I1025 08:59:52.773013   60162 cli_runner.go:164] Run: docker container inspect ha-098762-m04 --format={{.State.Status}}
	I1025 08:59:52.801581   60162 status.go:371] ha-098762-m04 host status = "Stopped" (err=<nil>)
	I1025 08:59:52.801605   60162 status.go:384] host is not running, skipping remaining checks
	I1025 08:59:52.801619   60162 status.go:176] ha-098762-m04 status: &{Name:ha-098762-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.26s)
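Those status.go:176 lines in the stderr trace are Go's %+v rendering of a pointer to minikube's per-node status struct. A rough reconstruction for readability; the field names are read straight off the log, and the real type in minikube's status.go may differ in detail:

package main

import "fmt"

// Status mirrors the fields visible in the log's "%+v" dump; it is a
// reconstruction from this output, not minikube's actual definition.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	st := &Status{
		Name: "ha-098762", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped",
	}
	// %+v on a pointer yields exactly the "&{Name:... Host:...}" shape
	// seen above.
	fmt.Printf("%+v\n", st)
}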

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (80.33s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1025 09:00:21.122160    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:00:48.823609    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-098762 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m19.330597623s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (80.33s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.83s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.83s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (54.48s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-098762 node add --control-plane --alsologtostderr -v 5: (53.372662907s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-098762 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-098762 status --alsologtostderr -v 5: (1.102772303s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (54.48s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.072092524s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

                                                
                                    
TestJSONOutput/start/Command (82.59s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-446585 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1025 09:02:57.141469    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-446585 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m22.584219926s)
--- PASS: TestJSONOutput/start/Command (82.59s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.85s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-446585 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-446585 --output=json --user=testUser: (5.850602498s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-621394 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-621394 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.917125ms)
-- stdout --
	{"specversion":"1.0","id":"62df18cc-695f-42cb-b126-d7acab6725eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-621394] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5b802662-ebfd-4ab8-b472-f5cffdca1a44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21796"}}
	{"specversion":"1.0","id":"1053f310-3cf4-4351-a35e-06706e7a0ff3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9a8e0c14-6565-4eda-a2ca-21ab1d2ffb87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig"}}
	{"specversion":"1.0","id":"540ff906-9fbf-4e3b-baf2-92bb80065ea1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube"}}
	{"specversion":"1.0","id":"6cdabd8d-e97a-4cd6-a676-dee47aa1ed33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"43467c64-aa7e-423b-a884-7a065de81f94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"30327a83-229b-44fc-908e-037ee811aa5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-621394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-621394
--- PASS: TestErrorJSONOutput (0.24s)
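Each stdout line above is a CloudEvents-style envelope: specversion, id, source, a type such as io.k8s.sigs.minikube.step or io.k8s.sigs.minikube.error, and a string-keyed data payload (message, currentstep, totalsteps, exitcode, ...). A hedged sketch of a consumer for that stream; the struct below is derived from the events shown here, not from minikube's own event definitions:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models only the fields visible in the log above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// e.g. piped from `minikube start --output=json`
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise on the stream
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}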

                                                
                                    
TestKicCustomNetwork/create_custom_network (40.45s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-249652 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-249652 --network=: (38.236845791s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-249652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-249652
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-249652: (2.189236505s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.45s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.04s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-477194 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-477194 --network=bridge: (33.904151923s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-477194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-477194
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-477194: (2.108517188s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.04s)

                                                
                                    
TestKicExistingNetwork (41.42s)
=== RUN   TestKicExistingNetwork
I1025 09:05:15.292421    4110 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1025 09:05:15.308001    4110 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1025 09:05:15.308972    4110 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1025 09:05:15.309010    4110 cli_runner.go:164] Run: docker network inspect existing-network
W1025 09:05:15.325598    4110 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1025 09:05:15.325624    4110 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1025 09:05:15.325641    4110 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1025 09:05:15.325739    4110 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1025 09:05:15.346218    4110 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4076b76bdd01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5e:93:ad:e4:3e:11} reservation:<nil>}
I1025 09:05:15.347134    4110 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40002d36d0}
I1025 09:05:15.347162    4110 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1025 09:05:15.347214    4110 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1025 09:05:15.406329    4110 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-405471 --network=existing-network
E1025 09:05:21.122474    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-405471 --network=existing-network: (39.176699345s)
helpers_test.go:175: Cleaning up "existing-network-405471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-405471
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-405471: (2.098022989s)
I1025 09:05:56.699443    4110 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (41.42s)
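The network_create.go lines above capture the two-step pattern: probe candidate private /24s (192.168.49.0/24 was already held by another cluster's bridge, so 192.168.58.0/24 was chosen), then create the bridge with an explicit subnet, gateway, MTU and minikube labels. A sketch of that walk; the step of 9 between candidates is inferred from logs like this one rather than taken from minikube's source, and the create flags simply mirror the logged command:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Subnets already in use, as discovered via `docker network inspect`.
	taken := map[string]bool{"192.168.49.0/24": true}

	var subnet, gateway string
	for third := 49; third < 256; third += 9 { // 49, 58, 67, ... (step inferred)
		cand := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cand] {
			subnet, gateway = cand, fmt.Sprintf("192.168.%d.1", third)
			break
		}
	}

	// Flag-for-flag what the log's `docker network create` line ran.
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet, "--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network").CombinedOutput()
	if err != nil {
		fmt.Println(err, string(out))
	}
}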

                                                
                                    
TestKicCustomSubnet (37.52s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-113553 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-113553 --subnet=192.168.60.0/24: (35.326005973s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-113553 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-113553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-113553
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-113553: (2.168993715s)
--- PASS: TestKicCustomSubnet (37.52s)

                                                
                                    
TestKicStaticIP (38.47s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-419372 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-419372 --static-ip=192.168.200.200: (36.128814251s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-419372 ip
helpers_test.go:175: Cleaning up "static-ip-419372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-419372
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-419372: (2.181943269s)
--- PASS: TestKicStaticIP (38.47s)

                                                
                                    
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (74.46s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-926887 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-926887 --driver=docker  --container-runtime=crio: (33.969477818s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-930013 --driver=docker  --container-runtime=crio
E1025 09:07:57.145010    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-930013 --driver=docker  --container-runtime=crio: (34.852047943s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-926887
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-930013
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-930013" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-930013
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-930013: (2.181668561s)
helpers_test.go:175: Cleaning up "first-926887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-926887
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-926887: (2.019269342s)
--- PASS: TestMinikubeProfile (74.46s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.59s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-474323 --memory=3072 --mount-string /tmp/TestMountStartserial1172405287/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-474323 --memory=3072 --mount-string /tmp/TestMountStartserial1172405287/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.588737223s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.59s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-474323 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.51s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-476661 --memory=3072 --mount-string /tmp/TestMountStartserial1172405287/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-476661 --memory=3072 --mount-string /tmp/TestMountStartserial1172405287/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.51272335s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.51s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-476661 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-474323 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-474323 --alsologtostderr -v=5: (1.697762653s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-476661 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-476661
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-476661: (1.287266862s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.64s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-476661
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-476661: (6.640125938s)
--- PASS: TestMountStart/serial/RestartStopped (7.64s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-476661 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (140.69s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-357782 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1025 09:10:21.121495    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:11:00.208968    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-357782 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m20.1512758s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (140.69s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.88s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-357782 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-357782 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-357782 -- rollout status deployment/busybox: (4.118916241s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-357782 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-357782 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-357782 -- exec busybox-7b57f96db7-kgr5j -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-357782 -- exec busybox-7b57f96db7-kldj7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-357782 -- exec busybox-7b57f96db7-kgr5j -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-357782 -- exec busybox-7b57f96db7-kldj7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-357782 -- exec busybox-7b57f96db7-kgr5j -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-357782 -- exec busybox-7b57f96db7-kldj7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.88s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.96s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-357782 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-357782 -- exec busybox-7b57f96db7-kgr5j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-357782 -- exec busybox-7b57f96db7-kgr5j -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-357782 -- exec busybox-7b57f96db7-kldj7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-357782 -- exec busybox-7b57f96db7-kldj7 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.96s)
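The sh -c pipeline above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, takes line 5 of nslookup's output and keeps its third space-separated field, which in busybox's output layout is the resolved IP of the host (192.168.67.1 here); that address is then pinged from each pod. A standalone sketch of the extraction; the canned nslookup text is illustrative, not captured from this run:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Illustrative busybox-style nslookup output (not from this run).
	out := "Server:    10.96.0.10\n" +
		"Address:   10.96.0.10:53\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.67.1 host.minikube.internal\n"

	lines := strings.Split(out, "\n")
	// awk 'NR==5' -> line index 4; cut -d' ' -f3 -> field index 2.
	fields := strings.Split(lines[4], " ")
	fmt.Println(fields[2]) // 192.168.67.1
}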

                                                
                                    
TestMultiNode/serial/AddNode (59.38s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-357782 -v=5 --alsologtostderr
E1025 09:11:44.185262    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-357782 -v=5 --alsologtostderr: (58.675021164s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.38s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-357782 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.57s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 cp testdata/cp-test.txt multinode-357782:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 cp multinode-357782:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile456100340/001/cp-test_multinode-357782.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 cp multinode-357782:/home/docker/cp-test.txt multinode-357782-m02:/home/docker/cp-test_multinode-357782_multinode-357782-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782-m02 "sudo cat /home/docker/cp-test_multinode-357782_multinode-357782-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 cp multinode-357782:/home/docker/cp-test.txt multinode-357782-m03:/home/docker/cp-test_multinode-357782_multinode-357782-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782-m03 "sudo cat /home/docker/cp-test_multinode-357782_multinode-357782-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 cp testdata/cp-test.txt multinode-357782-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 cp multinode-357782-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile456100340/001/cp-test_multinode-357782-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 cp multinode-357782-m02:/home/docker/cp-test.txt multinode-357782:/home/docker/cp-test_multinode-357782-m02_multinode-357782.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782 "sudo cat /home/docker/cp-test_multinode-357782-m02_multinode-357782.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 cp multinode-357782-m02:/home/docker/cp-test.txt multinode-357782-m03:/home/docker/cp-test_multinode-357782-m02_multinode-357782-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782-m03 "sudo cat /home/docker/cp-test_multinode-357782-m02_multinode-357782-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 cp testdata/cp-test.txt multinode-357782-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 cp multinode-357782-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile456100340/001/cp-test_multinode-357782-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 cp multinode-357782-m03:/home/docker/cp-test.txt multinode-357782:/home/docker/cp-test_multinode-357782-m03_multinode-357782.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782 "sudo cat /home/docker/cp-test_multinode-357782-m03_multinode-357782.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 cp multinode-357782-m03:/home/docker/cp-test.txt multinode-357782-m02:/home/docker/cp-test_multinode-357782-m03_multinode-357782-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 ssh -n multinode-357782-m02 "sudo cat /home/docker/cp-test_multinode-357782-m03_multinode-357782-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.57s)
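Every step of the CopyFile block is one round-trip: minikube cp pushes a file to a node, and minikube ssh -n <node> "sudo cat ..." reads it back to confirm the content survived. A sketch of a single round-trip using the profile and node names from this run:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the same binary under test with the given arguments.
func run(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
}

func main() {
	if out, err := run("-p", "multinode-357782", "cp",
		"testdata/cp-test.txt",
		"multinode-357782-m02:/home/docker/cp-test.txt"); err != nil {
		panic(fmt.Sprint(err, string(out)))
	}
	out, err := run("-p", "multinode-357782", "ssh", "-n", "multinode-357782-m02",
		"sudo cat /home/docker/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Printf("round-tripped contents: %s", out)
}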

                                                
                                    
TestMultiNode/serial/StopNode (2.47s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-357782 node stop m03: (1.337329438s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-357782 status: exit status 7 (541.939481ms)
-- stdout --
	multinode-357782
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-357782-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-357782-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-357782 status --alsologtostderr: exit status 7 (586.250446ms)
-- stdout --
	multinode-357782
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-357782-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-357782-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:12:40.141759  110643 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:12:40.141872  110643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:12:40.141882  110643 out.go:374] Setting ErrFile to fd 2...
	I1025 09:12:40.141888  110643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:12:40.142205  110643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:12:40.142441  110643 out.go:368] Setting JSON to false
	I1025 09:12:40.142480  110643 mustload.go:65] Loading cluster: multinode-357782
	I1025 09:12:40.142554  110643 notify.go:220] Checking for updates...
	I1025 09:12:40.143865  110643 config.go:182] Loaded profile config "multinode-357782": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:12:40.143894  110643 status.go:174] checking status of multinode-357782 ...
	I1025 09:12:40.144458  110643 cli_runner.go:164] Run: docker container inspect multinode-357782 --format={{.State.Status}}
	I1025 09:12:40.164567  110643 status.go:371] multinode-357782 host status = "Running" (err=<nil>)
	I1025 09:12:40.164592  110643 host.go:66] Checking if "multinode-357782" exists ...
	I1025 09:12:40.164916  110643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-357782
	I1025 09:12:40.194164  110643 host.go:66] Checking if "multinode-357782" exists ...
	I1025 09:12:40.194473  110643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:12:40.194530  110643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357782
	I1025 09:12:40.215721  110643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/multinode-357782/id_rsa Username:docker}
	I1025 09:12:40.319935  110643 ssh_runner.go:195] Run: systemctl --version
	I1025 09:12:40.326332  110643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:12:40.339359  110643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:12:40.394948  110643 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 09:12:40.385872599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:12:40.395570  110643 kubeconfig.go:125] found "multinode-357782" server: "https://192.168.67.2:8443"
	I1025 09:12:40.395604  110643 api_server.go:166] Checking apiserver status ...
	I1025 09:12:40.395676  110643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:12:40.408208  110643 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1233/cgroup
	I1025 09:12:40.417218  110643 api_server.go:182] apiserver freezer: "9:freezer:/docker/704f6b395a22fbb13f949184fbb0ce1e31c48274ab7f059a935d59f964400a9d/crio/crio-4bd39570c33271141ffba528d4e0556b3320c0687027d30701f15bb9ff4fe5af"
	I1025 09:12:40.417284  110643 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/704f6b395a22fbb13f949184fbb0ce1e31c48274ab7f059a935d59f964400a9d/crio/crio-4bd39570c33271141ffba528d4e0556b3320c0687027d30701f15bb9ff4fe5af/freezer.state
	I1025 09:12:40.424798  110643 api_server.go:204] freezer state: "THAWED"
	I1025 09:12:40.424826  110643 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1025 09:12:40.440974  110643 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1025 09:12:40.441022  110643 status.go:463] multinode-357782 apiserver status = Running (err=<nil>)
	I1025 09:12:40.441051  110643 status.go:176] multinode-357782 status: &{Name:multinode-357782 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:12:40.441088  110643 status.go:174] checking status of multinode-357782-m02 ...
	I1025 09:12:40.441476  110643 cli_runner.go:164] Run: docker container inspect multinode-357782-m02 --format={{.State.Status}}
	I1025 09:12:40.478414  110643 status.go:371] multinode-357782-m02 host status = "Running" (err=<nil>)
	I1025 09:12:40.478449  110643 host.go:66] Checking if "multinode-357782-m02" exists ...
	I1025 09:12:40.478761  110643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-357782-m02
	I1025 09:12:40.507721  110643 host.go:66] Checking if "multinode-357782-m02" exists ...
	I1025 09:12:40.508071  110643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:12:40.508109  110643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357782-m02
	I1025 09:12:40.529036  110643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21796-2312/.minikube/machines/multinode-357782-m02/id_rsa Username:docker}
	I1025 09:12:40.635795  110643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:12:40.649304  110643 status.go:176] multinode-357782-m02 status: &{Name:multinode-357782-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:12:40.649337  110643 status.go:174] checking status of multinode-357782-m03 ...
	I1025 09:12:40.649666  110643 cli_runner.go:164] Run: docker container inspect multinode-357782-m03 --format={{.State.Status}}
	I1025 09:12:40.667043  110643 status.go:371] multinode-357782-m03 host status = "Stopped" (err=<nil>)
	I1025 09:12:40.667065  110643 status.go:384] host is not running, skipping remaining checks
	I1025 09:12:40.667071  110643 status.go:176] multinode-357782-m03 status: &{Name:multinode-357782-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.47s)
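The stderr trace above also documents how status decides apiserver health: it resolves the apiserver container's freezer cgroup, checks that its state is THAWED, then probes https://192.168.67.2:8443/healthz and treats HTTP 200 as "apiserver status = Running". A minimal sketch of that final probe; skipping TLS verification is a simplifying assumption here, not necessarily how minikube's client is configured:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for brevity: skip certificate verification
			// instead of trusting the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	// The log equates a 200 here with a Running apiserver.
	fmt.Println("healthz returned", resp.StatusCode)
}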

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.24s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-357782 node start m03 -v=5 --alsologtostderr: (7.451787908s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.24s)

TestMultiNode/serial/RestartKeepsNodes (77.27s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-357782
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-357782
E1025 09:12:57.143052    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-357782: (25.035484342s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-357782 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-357782 --wait=true -v=5 --alsologtostderr: (52.092980971s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-357782
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.27s)

TestMultiNode/serial/DeleteNode (5.63s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-357782 node delete m03: (4.934098012s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.63s)
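The go-template passed to kubectl above walks every node's status.conditions and prints the status of the "Ready" condition. A self-contained sketch of the same template evaluated with Go's text/template against a stubbed node list (the stub data is an assumption standing in for real `kubectl get nodes -o json` output):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stub shaped like the JSON kubectl returns; only the fields the template touches.
	nodes := map[string]any{
		"items": []any{
			map[string]any{
				"status": map[string]any{
					"conditions": []any{
						map[string]any{"type": "MemoryPressure", "status": "False"},
						map[string]any{"type": "Ready", "status": "True"},
					},
				},
			},
		},
	}
	// The exact template string from the test command above.
	const src = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	t := template.Must(template.New("ready").Parse(src))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
	// Prints " True" — one line per Ready node.
}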

TestMultiNode/serial/StopMultiNode (23.98s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-357782 stop: (23.7940106s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-357782 status: exit status 7 (103.369438ms)
-- stdout --
	multinode-357782
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-357782-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-357782 status --alsologtostderr: exit status 7 (86.797228ms)
-- stdout --
	multinode-357782
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-357782-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1025 09:14:35.756337  118409 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:14:35.756449  118409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:14:35.756460  118409 out.go:374] Setting ErrFile to fd 2...
	I1025 09:14:35.756464  118409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:14:35.756724  118409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:14:35.756905  118409 out.go:368] Setting JSON to false
	I1025 09:14:35.756945  118409 mustload.go:65] Loading cluster: multinode-357782
	I1025 09:14:35.757018  118409 notify.go:220] Checking for updates...
	I1025 09:14:35.758169  118409 config.go:182] Loaded profile config "multinode-357782": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:14:35.758193  118409 status.go:174] checking status of multinode-357782 ...
	I1025 09:14:35.758829  118409 cli_runner.go:164] Run: docker container inspect multinode-357782 --format={{.State.Status}}
	I1025 09:14:35.780969  118409 status.go:371] multinode-357782 host status = "Stopped" (err=<nil>)
	I1025 09:14:35.780993  118409 status.go:384] host is not running, skipping remaining checks
	I1025 09:14:35.781000  118409 status.go:176] multinode-357782 status: &{Name:multinode-357782 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:14:35.781031  118409 status.go:174] checking status of multinode-357782-m02 ...
	I1025 09:14:35.781327  118409 cli_runner.go:164] Run: docker container inspect multinode-357782-m02 --format={{.State.Status}}
	I1025 09:14:35.796146  118409 status.go:371] multinode-357782-m02 host status = "Stopped" (err=<nil>)
	I1025 09:14:35.796170  118409 status.go:384] host is not running, skipping remaining checks
	I1025 09:14:35.796184  118409 status.go:176] multinode-357782-m02 status: &{Name:multinode-357782-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.98s)

TestMultiNode/serial/RestartMultiNode (50.93s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-357782 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1025 09:15:21.121047    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-357782 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.243803488s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-357782 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.93s)

TestMultiNode/serial/ValidateNameConflict (35.76s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-357782
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-357782-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-357782-m02 --driver=docker  --container-runtime=crio: exit status 14 (97.585113ms)
-- stdout --
	* [multinode-357782-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-357782-m02' is duplicated with machine name 'multinode-357782-m02' in profile 'multinode-357782'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-357782-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-357782-m03 --driver=docker  --container-runtime=crio: (32.846125965s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-357782
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-357782: exit status 80 (520.217722ms)
-- stdout --
	* Adding node m03 to cluster multinode-357782 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-357782-m03 already exists in multinode-357782-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-357782-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-357782-m03: (2.239563758s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.76s)

TestPreload (127.12s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-930715 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-930715 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m1.568292482s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-930715 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-930715 image pull gcr.io/k8s-minikube/busybox: (2.314530054s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-930715
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-930715: (5.925688754s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-930715 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1025 09:17:57.140458    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-930715 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (54.555951411s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-930715 image list
helpers_test.go:175: Cleaning up "test-preload-930715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-930715
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-930715: (2.514435378s)
--- PASS: TestPreload (127.12s)

TestScheduledStopUnix (108.17s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-032538 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-032538 --memory=3072 --driver=docker  --container-runtime=crio: (30.915701162s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-032538 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-032538 -n scheduled-stop-032538
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-032538 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1025 09:18:45.548189    4110 retry.go:31] will retry after 117.663µs: open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/scheduled-stop-032538/pid: no such file or directory
I1025 09:18:45.548844    4110 retry.go:31] will retry after 143.064µs: open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/scheduled-stop-032538/pid: no such file or directory
I1025 09:18:45.550035    4110 retry.go:31] will retry after 203.292µs: open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/scheduled-stop-032538/pid: no such file or directory
I1025 09:18:45.551314    4110 retry.go:31] will retry after 487.824µs: open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/scheduled-stop-032538/pid: no such file or directory
I1025 09:18:45.552478    4110 retry.go:31] will retry after 299.843µs: open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/scheduled-stop-032538/pid: no such file or directory
I1025 09:18:45.553801    4110 retry.go:31] will retry after 436.317µs: open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/scheduled-stop-032538/pid: no such file or directory
I1025 09:18:45.555133    4110 retry.go:31] will retry after 814.827µs: open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/scheduled-stop-032538/pid: no such file or directory
I1025 09:18:45.556267    4110 retry.go:31] will retry after 1.455508ms: open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/scheduled-stop-032538/pid: no such file or directory
I1025 09:18:45.558661    4110 retry.go:31] will retry after 1.63699ms: open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/scheduled-stop-032538/pid: no such file or directory
I1025 09:18:45.560877    4110 retry.go:31] will retry after 3.728043ms: open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/scheduled-stop-032538/pid: no such file or directory
I1025 09:18:45.570263    4110 retry.go:31] will retry after 3.542614ms: open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/scheduled-stop-032538/pid: no such file or directory
I1025 09:18:45.574637    4110 retry.go:31] will retry after 10.697311ms: open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/scheduled-stop-032538/pid: no such file or directory
I1025 09:18:45.585496    4110 retry.go:31] will retry after 14.698887ms: open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/scheduled-stop-032538/pid: no such file or directory
I1025 09:18:45.600719    4110 retry.go:31] will retry after 22.417473ms: open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/scheduled-stop-032538/pid: no such file or directory
I1025 09:18:45.624005    4110 retry.go:31] will retry after 20.622526ms: open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/scheduled-stop-032538/pid: no such file or directory
I1025 09:18:45.644817    4110 retry.go:31] will retry after 58.461639ms: open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/scheduled-stop-032538/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-032538 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-032538 -n scheduled-stop-032538
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-032538
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-032538 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-032538
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-032538: exit status 7 (65.348859ms)
-- stdout --
	scheduled-stop-032538
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-032538 -n scheduled-stop-032538
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-032538 -n scheduled-stop-032538: exit status 7 (74.409861ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-032538" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-032538
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-032538: (5.398204077s)
--- PASS: TestScheduledStopUnix (108.17s)
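The retry.go lines in the test above show a jittered, roughly doubling backoff while the test polls the profile's pid file. A rough standalone sketch of that polling pattern, under the assumption that the waits simply grow until the file can be read (the helper name and timeout are illustrative, not minikube's retry package):

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls until path exists, sleeping with jittered, roughly
// doubling delays like the "will retry after ..." log lines above.
func waitForFile(path string, timeout time.Duration) error {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else if !os.IsNotExist(err) {
			return err
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForFile("/tmp/scheduled-stop.pid", 2*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}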

TestInsufficientStorage (13.18s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-244206 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-244206 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.55126822s)
-- stdout --
	{"specversion":"1.0","id":"e5946217-34cb-4de5-b944-94e6c15d1319","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-244206] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5f0fcd65-383a-47d3-b435-d852d1b3627e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21796"}}
	{"specversion":"1.0","id":"76db367c-de91-403b-9a25-95194f86f1f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"252accdb-8c3c-4032-91d7-6afcdd74b5f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig"}}
	{"specversion":"1.0","id":"034c1777-2d93-4a6d-8d02-308768dfbeb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube"}}
	{"specversion":"1.0","id":"4affefbb-6fac-4c2f-9377-ebc705c7a349","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4c127cc9-e98d-4493-befe-3c49779a27f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"75333d1d-fb79-44c7-b252-d44f87f6f75a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4de79d10-f9ba-46e8-9557-150a34cdca67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c26517c7-d710-441f-a00e-1f14f6f3a066","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b361d313-82df-483f-a65c-30f233894cc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"21c54582-1856-4ef1-a292-46c438be50c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-244206\" primary control-plane node in \"insufficient-storage-244206\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0fd1e5c0-5789-46f6-bded-d6c2ab512aa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b659f19c-4b2d-4441-a7b3-7217d434e561","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3d9ae7e7-798d-4d08-b832-7e0b0bb375c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-244206 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-244206 --output=json --layout=cluster: exit status 7 (312.070676ms)
-- stdout --
	{"Name":"insufficient-storage-244206","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-244206","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1025 09:20:12.932749  134795 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-244206" does not appear in /home/jenkins/minikube-integration/21796-2312/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-244206 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-244206 --output=json --layout=cluster: exit status 7 (307.26796ms)
-- stdout --
	{"Name":"insufficient-storage-244206","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-244206","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1025 09:20:13.237884  134861 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-244206" does not appear in /home/jenkins/minikube-integration/21796-2312/kubeconfig
	E1025 09:20:13.247854  134861 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/insufficient-storage-244206/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-244206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-244206
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-244206: (2.011663614s)
--- PASS: TestInsufficientStorage (13.18s)
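Every line of the --output=json stdout above is a CloudEvents-style record: specversion, id, source, type, and a string-valued data map. A minimal sketch for decoding one such line; the struct below models only the fields visible in this log and is an assumption for illustration, not an official minikube API:

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent mirrors the fields seen in the JSON lines above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Abbreviated copy of the error event from the log above.
	line := `{"specversion":"1.0","id":"3d9ae7e7-798d-4d08-b832-7e0b0bb375c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit %s)\n", ev.Type, ev.Data["name"], ev.Data["exitcode"])
}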

TestRunningBinaryUpgrade (54.04s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3365295257 start -p running-upgrade-826823 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3365295257 start -p running-upgrade-826823 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.523147307s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-826823 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-826823 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.72783282s)
helpers_test.go:175: Cleaning up "running-upgrade-826823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-826823
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-826823: (2.048066867s)
--- PASS: TestRunningBinaryUpgrade (54.04s)

TestKubernetesUpgrade (368.16s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-707917 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-707917 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.011396252s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-707917
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-707917: (1.534463976s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-707917 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-707917 status --format={{.Host}}: exit status 7 (123.484853ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-707917 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1025 09:22:57.140267    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-707917 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m41.102569509s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-707917 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-707917 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-707917 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (101.280865ms)
-- stdout --
	* [kubernetes-upgrade-707917] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-707917
	    minikube start -p kubernetes-upgrade-707917 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7079172 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-707917 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-707917 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1025 09:27:40.210867    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-707917 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.678251365s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-707917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-707917
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-707917: (2.498349158s)
--- PASS: TestKubernetesUpgrade (368.16s)
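The K8S_DOWNGRADE_UNSUPPORTED exit above comes from a guard that refuses to start an existing cluster at a Kubernetes version older than the one already deployed. A toy sketch of such a version gate, assuming a hand-rolled semver comparison (this is not minikube's actual implementation):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits "v1.34.1" into numeric fields; error handling is elided and
// both versions are assumed to have the same number of fields.
func parse(v string) []int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	out := make([]int, len(parts))
	for i, p := range parts {
		out[i], _ = strconv.Atoi(p)
	}
	return out
}

// older reports whether version a sorts before version b.
func older(a, b string) bool {
	pa, pb := parse(a), parse(b)
	for i := range pa {
		if pa[i] != pb[i] {
			return pa[i] < pb[i]
		}
	}
	return false
}

func main() {
	deployed, requested := "v1.34.1", "v1.28.0"
	if older(requested, deployed) {
		fmt.Printf("refusing downgrade from %s to %s; delete the cluster or use a new profile\n", deployed, requested)
	}
}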

TestMissingContainerUpgrade (114.02s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.654281890 start -p missing-upgrade-334875 --memory=3072 --driver=docker  --container-runtime=crio
E1025 09:20:21.121639    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.654281890 start -p missing-upgrade-334875 --memory=3072 --driver=docker  --container-runtime=crio: (1m1.573721573s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-334875
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-334875
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-334875 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-334875 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.485850235s)
helpers_test.go:175: Cleaning up "missing-upgrade-334875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-334875
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-334875: (3.502796838s)
--- PASS: TestMissingContainerUpgrade (114.02s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-693294 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-693294 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (102.715482ms)
-- stdout --
	* [NoKubernetes-693294] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (43.05s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-693294 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-693294 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.550739229s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-693294 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.05s)

TestNoKubernetes/serial/StartWithStopK8s (38.87s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-693294 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-693294 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.46115429s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-693294 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-693294 status -o json: exit status 2 (317.359078ms)
-- stdout --
	{"Name":"NoKubernetes-693294","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-693294
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-693294: (2.093611066s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.87s)

TestNoKubernetes/serial/Start (10.65s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-693294 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-693294 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (10.652232286s)
--- PASS: TestNoKubernetes/serial/Start (10.65s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-693294 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-693294 "sudo systemctl is-active --quiet service kubelet": exit status 1 (346.087088ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
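The check above passes because `systemctl is-active --quiet` exits 0 only when the unit is active; for an inactive unit it typically exits 3, which `minikube ssh` surfaces as the non-zero status seen in the stderr. A small sketch of driving the same probe from Go (binary path, profile name, and remote command are taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive runs the same probe as the test: true only when
// `systemctl is-active` exits 0, i.e. the kubelet unit is active.
func kubeletActive(profile string) bool {
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	return cmd.Run() == nil
}

func main() {
	fmt.Println("kubelet running:", kubeletActive("NoKubernetes-693294"))
}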

TestNoKubernetes/serial/ProfileList (1.25s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.25s)

TestNoKubernetes/serial/Stop (1.44s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-693294
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-693294: (1.440170015s)
--- PASS: TestNoKubernetes/serial/Stop (1.44s)

TestNoKubernetes/serial/StartNoArgs (9.47s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-693294 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-693294 --driver=docker  --container-runtime=crio: (9.47397839s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.47s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-693294 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-693294 "sudo systemctl is-active --quiet service kubelet": exit status 1 (372.233822ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

TestStoppedBinaryUpgrade/Setup (0.64s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.64s)

TestStoppedBinaryUpgrade/Upgrade (66.6s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.4250860208 start -p stopped-upgrade-971794 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.4250860208 start -p stopped-upgrade-971794 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.355401953s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.4250860208 -p stopped-upgrade-971794 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.4250860208 -p stopped-upgrade-971794 stop: (1.322964835s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-971794 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-971794 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.92242291s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (66.60s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-971794
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-971794: (1.208979694s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

TestPause/serial/Start (84.27s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-993166 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1025 09:25:21.121012    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-993166 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.268379691s)
--- PASS: TestPause/serial/Start (84.27s)

TestPause/serial/SecondStartNoReconfiguration (123.09s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-993166 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-993166 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (2m3.069294622s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (123.09s)

TestNetworkPlugins/group/false (4.75s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-068349 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-068349 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (241.338846ms)
-- stdout --
	* [false-068349] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21796
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1025 09:28:15.671211  173764 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:28:15.671430  173764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:28:15.671459  173764 out.go:374] Setting ErrFile to fd 2...
	I1025 09:28:15.671479  173764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:28:15.671784  173764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21796-2312/.minikube/bin
	I1025 09:28:15.672253  173764 out.go:368] Setting JSON to false
	I1025 09:28:15.673276  173764 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4247,"bootTime":1761380249,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:28:15.673373  173764 start.go:141] virtualization:  
	I1025 09:28:15.677371  173764 out.go:179] * [false-068349] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:28:15.681278  173764 out.go:179]   - MINIKUBE_LOCATION=21796
	I1025 09:28:15.681511  173764 notify.go:220] Checking for updates...
	I1025 09:28:15.687215  173764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:28:15.690155  173764 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21796-2312/kubeconfig
	I1025 09:28:15.693184  173764 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21796-2312/.minikube
	I1025 09:28:15.696194  173764 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:28:15.699018  173764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:28:15.702549  173764 config.go:182] Loaded profile config "force-systemd-flag-100847": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:28:15.702663  173764 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:28:15.743210  173764 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:28:15.743323  173764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:28:15.836979  173764 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 09:28:15.82687307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:28:15.837080  173764 docker.go:318] overlay module found
	I1025 09:28:15.840031  173764 out.go:179] * Using the docker driver based on user configuration
	I1025 09:28:15.842966  173764 start.go:305] selected driver: docker
	I1025 09:28:15.842986  173764 start.go:925] validating driver "docker" against <nil>
	I1025 09:28:15.842999  173764 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:28:15.846509  173764 out.go:203] 
	W1025 09:28:15.849344  173764 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1025 09:28:15.852062  173764 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-068349 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-068349

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-068349

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-068349

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-068349

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-068349

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-068349

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-068349

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-068349

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-068349

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-068349

>>> host: /etc/nsswitch.conf:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

>>> host: /etc/hosts:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

>>> host: /etc/resolv.conf:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-068349

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-068349" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-068349" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-068349" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-068349" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-068349" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-068349" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-068349" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-068349" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-068349" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-068349" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-068349" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-068349

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-068349"

                                                
                                                
----------------------- debugLogs end: false-068349 [took: 4.264544168s] --------------------------------
helpers_test.go:175: Cleaning up "false-068349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-068349
--- PASS: TestNetworkPlugins/group/false (4.75s)

TestStartStop/group/old-k8s-version/serial/FirstStart (58.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1025 09:30:21.121435    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (58.60532904s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (58.61s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-881642 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e0d22b63-119d-4a6a-aa7a-2f343c65f609] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e0d22b63-119d-4a6a-aa7a-2f343c65f609] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.009057502s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-881642 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.50s)
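
For reference, the "integration-test=busybox" pod these DeployApp steps wait on comes from testdata/busybox.yaml in the test tree; a minimal sketch of such a manifest, assuming the pod name, label, and image this run reports elsewhere (the repository copy may differ):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]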

TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-881642 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-881642 --alsologtostderr -v=3: (12.018349827s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-881642 -n old-k8s-version-881642
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-881642 -n old-k8s-version-881642: exit status 7 (78.697297ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-881642 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
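
The "exit status 7 (may be ok)" noted above is the expected status for a stopped cluster: minikube's status command appears to encode problems as bit flags, with 7 being host, cluster, and Kubernetes components all reported as not running (flag values assumed from minikube's status implementation, not shown in this log). A sketch of using that in a script:

	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-881642
	if [ $? -eq 7 ]; then
	  # 7 = 1|2|4: host, cluster and Kubernetes all stopped; addons can still be toggled
	  echo "fully stopped"
	fi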

TestStartStop/group/old-k8s-version/serial/SecondStart (46.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-881642 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (46.295193933s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-881642 -n old-k8s-version-881642
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.70s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pvtmx" [72a9c952-6f92-4ca8-8bcb-dee91a24fd0c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003618339s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pvtmx" [72a9c952-6f92-4ca8-8bcb-dee91a24fd0c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003178127s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-881642 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-881642 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
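
The VerifyKubernetesImages steps shell out to "image list --format=json" and flag anything outside the expected Kubernetes image set, as seen above. A hand-run equivalent, assuming the JSON is an array of objects with a repoTags field (schema assumed, not shown in this log):

	out/minikube-linux-arm64 -p old-k8s-version-881642 image list --format=json \
	  | jq -r '.[].repoTags[]' | sort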

TestStartStop/group/no-preload/serial/FirstStart (78.91s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m18.90809474s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.91s)
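
The --preload=false flag makes this profile pull each Kubernetes image individually instead of unpacking minikube's preloaded image tarball, which is a plausible reason this FirstStart (1m18.9s) runs longer than the preloaded profiles. Listing what ended up in the runtime afterwards (illustrative, not part of the test):

	out/minikube-linux-arm64 -p no-preload-179869 image list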

TestStartStop/group/embed-certs/serial/FirstStart (90.98s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 09:32:57.139562    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m30.979963047s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (90.98s)

TestStartStop/group/no-preload/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-179869 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c2e838fa-d7c8-4aaa-822c-b07461356def] Pending
helpers_test.go:352: "busybox" [c2e838fa-d7c8-4aaa-822c-b07461356def] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c2e838fa-d7c8-4aaa-822c-b07461356def] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.008289441s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-179869 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.34s)

TestStartStop/group/no-preload/serial/Stop (12.02s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-179869 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-179869 --alsologtostderr -v=3: (12.020476291s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.02s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-179869 -n no-preload-179869
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-179869 -n no-preload-179869: exit status 7 (69.509562ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-179869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (49.12s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-179869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.606472971s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-179869 -n no-preload-179869
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.12s)

TestStartStop/group/embed-certs/serial/DeployApp (8.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-173264 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bc9e47c5-4d6f-402b-b0c1-7ebe8a846159] Pending
helpers_test.go:352: "busybox" [bc9e47c5-4d6f-402b-b0c1-7ebe8a846159] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bc9e47c5-4d6f-402b-b0c1-7ebe8a846159] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005338956s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-173264 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.41s)

TestStartStop/group/embed-certs/serial/Stop (12.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-173264 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-173264 --alsologtostderr -v=3: (12.144235742s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-173264 -n embed-certs-173264
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-173264 -n embed-certs-173264: exit status 7 (72.992239ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-173264 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (49.36s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-173264 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.97029439s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-173264 -n embed-certs-173264
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.36s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mfm5d" [a50d6b41-01f1-46ca-bcfd-0d1aefe83b4a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003503244s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mfm5d" [a50d6b41-01f1-46ca-bcfd-0d1aefe83b4a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003199795s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-179869 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-179869 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.7s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-666079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 09:35:21.121400    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-666079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.700448786s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.70s)
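
This profile moves the apiserver to 8444 via --apiserver-port. One way to confirm the port reached the kubeconfig (an illustrative check, not run by the test):

	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-666079")].cluster.server}'
	# expected to end in :8444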

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sj8dq" [25fa6987-f91d-42a5-8ef0-848aca718f8c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004835461s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sj8dq" [25fa6987-f91d-42a5-8ef0-848aca718f8c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003724884s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-173264 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-173264 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/newest-cni/serial/FirstStart (38.78s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-052144 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 09:35:51.321010    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:51.327455    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:51.338853    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:51.360238    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:51.401630    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:51.483481    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:51.645607    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:51.967216    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:52.609154    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:53.890486    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:35:56.452226    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:36:01.574255    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:36:11.817281    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-052144 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (38.783977227s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.78s)
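
The --extra-config=kubeadm.pod-network-cidr flag is passed through to kubeadm, so the 10.42.0.0/16 range should surface as the node's podCIDR. A sketch of verifying that after this start (command illustrative, not run by the test):

	kubectl --context newest-cni-052144 get nodes -o jsonpath='{.items[0].spec.podCIDR}'
	# expected: a subnet carved from 10.42.0.0/16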

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.36s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-052144 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-052144 --alsologtostderr -v=3: (1.358421288s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.36s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-052144 -n newest-cni-052144
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-052144 -n newest-cni-052144: exit status 7 (74.752454ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-052144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (15.83s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-052144 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 09:36:32.299387    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-052144 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.412577986s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-052144 -n newest-cni-052144
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.83s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-666079 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8bca5827-d45d-434b-b53a-3f6ea93124bb] Pending
helpers_test.go:352: "busybox" [8bca5827-d45d-434b-b53a-3f6ea93124bb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8bca5827-d45d-434b-b53a-3f6ea93124bb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004486205s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-666079 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.45s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
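
Both warnings above come from the test harness itself: in cni mode it skips the user-app and addon checks rather than failing them, since pod scheduling may need extra CNI setup first. An illustrative way to see whether pods could schedule (not run by the test):

	kubectl --context newest-cni-052144 get nodes
	# nodes report Ready only once a CNI plugin is functioning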

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-052144 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-666079 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-666079 --alsologtostderr -v=3: (12.183618513s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.18s)

TestNetworkPlugins/group/auto/Start (88.24s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-068349 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-068349 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m28.239284111s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.24s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-666079 -n default-k8s-diff-port-666079
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-666079 -n default-k8s-diff-port-666079: exit status 7 (92.85397ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-666079 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-666079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 09:37:13.261589    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:37:57.139579    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-666079 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (55.053868222s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-666079 -n default-k8s-diff-port-666079
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.42s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v6j8w" [8d8e464a-fad4-4966-91eb-5d8b916d9ed7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003263765s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v6j8w" [8d8e464a-fad4-4966-91eb-5d8b916d9ed7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003902229s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-666079 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-666079 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestNetworkPlugins/group/kindnet/Start (92.03s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-068349 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-068349 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m32.034534876s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (92.03s)

TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-068349 "pgrep -a kubelet"
I1025 09:38:25.296300    4110 config.go:182] Loaded profile config "auto-068349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

TestNetworkPlugins/group/auto/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-068349 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vlzfw" [3792617c-7805-445f-9d62-f608b5d7f17d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vlzfw" [3792617c-7805-445f-9d62-f608b5d7f17d] Running
E1025 09:38:35.183492    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00509351s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.40s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-068349 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-068349 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-068349 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/calico/Start (66.25s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-068349 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1025 09:39:03.358177    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:39:23.839821    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-068349 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m6.252595447s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.25s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-fcf9q" [b1a905dc-e08e-4df3-b831-8d5850448f61] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007382798s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.62s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-068349 "pgrep -a kubelet"
I1025 09:40:01.011704    4110 config.go:182] Loaded profile config "kindnet-068349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.62s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.39s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-068349 replace --force -f testdata/netcat-deployment.yaml
I1025 09:40:01.394980    4110 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bpgmp" [1b45253d-93c6-4ec0-8aad-dd6cdb03bb9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 09:40:04.801350    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-bpgmp" [1b45253d-93c6-4ec0-8aad-dd6cdb03bb9a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003096873s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.39s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-28czm" [ab1be1cd-c765-4d3d-9efc-47b8333f2e93] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004274734s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-068349 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-068349 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-068349 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-068349 "pgrep -a kubelet"
I1025 09:40:15.395177    4110 config.go:182] Loaded profile config "calico-068349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-068349 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9xkmq" [86c787a0-5fe3-42a0-a968-e9409586a208] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9xkmq" [86c787a0-5fe3-42a0-a968-e9409586a208] Running
E1025 09:40:21.120957    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004628152s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.27s)

TestNetworkPlugins/group/calico/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-068349 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-068349 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-068349 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.25s)

TestNetworkPlugins/group/custom-flannel/Start (69.77s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-068349 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-068349 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m9.771671753s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.77s)

TestNetworkPlugins/group/enable-default-cni/Start (81.95s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-068349 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1025 09:41:19.025546    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/old-k8s-version-881642/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:26.723233    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:41.674687    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:41.681192    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:41.692700    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:41.714216    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:41.755735    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:41.837362    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:41.998942    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:42.323791    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:42.965896    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:44.248109    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:41:46.810203    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-068349 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m21.952758034s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.95s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-068349 "pgrep -a kubelet"
I1025 09:41:47.519955    4110 config.go:182] Loaded profile config "custom-flannel-068349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-068349 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9q65f" [2256f71b-f047-41fd-be0f-79461a61f448] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 09:41:51.931828    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-9q65f" [2256f71b-f047-41fd-be0f-79461a61f448] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.00453618s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-068349 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-068349 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-068349 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-068349 "pgrep -a kubelet"
I1025 09:42:14.349543    4110 config.go:182] Loaded profile config "enable-default-cni-068349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.43s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-068349 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nx27z" [c9efd68f-736b-47ac-b8c6-927c0afebfdb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nx27z" [c9efd68f-736b-47ac-b8c6-927c0afebfdb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003992273s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.43s)

TestNetworkPlugins/group/flannel/Start (70.94s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-068349 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1025 09:42:22.655706    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-068349 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m10.943678897s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.94s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-068349 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-068349 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-068349 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/bridge/Start (76.07s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-068349 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1025 09:42:57.139575    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/addons-468341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:43:03.616938    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/default-k8s-diff-port-666079/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:43:25.657701    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:43:25.663991    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:43:25.675351    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:43:25.696766    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:43:25.738098    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:43:25.819445    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:43:25.980928    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:43:26.302681    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:43:26.943961    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:43:28.225685    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:43:30.787660    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-068349 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m16.072641933s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.07s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-27pb5" [406584ac-81db-4c21-b920-04aa55f7ce73] Running
E1025 09:43:35.909669    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003524671s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-068349 "pgrep -a kubelet"
I1025 09:43:38.679369    4110 config.go:182] Loaded profile config "flannel-068349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-068349 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9wz68" [5b8cb240-4423-47e9-b873-fa98c7b7640b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 09:43:42.859187    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/no-preload-179869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-9wz68" [5b8cb240-4423-47e9-b873-fa98c7b7640b] Running
E1025 09:43:46.151683    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/auto-068349/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004120069s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.31s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-068349 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-068349 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-068349 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-068349 "pgrep -a kubelet"
I1025 09:44:08.218275    4110 config.go:182] Loaded profile config "bridge-068349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

TestNetworkPlugins/group/bridge/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-068349 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xk8nc" [c61d4918-ff0c-4afb-8610-ae7f94f7b701] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xk8nc" [c61d4918-ff0c-4afb-8610-ae7f94f7b701] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.0042444s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.36s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-068349 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-068349 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-068349 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (30/326)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.46s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-812739 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-812739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-812739
--- SKIP: TestDownloadOnlyKic (0.46s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)
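
Unlike the OS gates above, the skaffold gate is runtime-based: skaffold builds against the Docker daemon exposed by `minikube docker-env`, which this crio job cannot provide. A minimal sketch of that kind of guard (helper name and parameter are illustrative, not the exact skaffold_test.go source):

    package skaffold_test

    import "testing"

    // checkSkaffoldSupported skips unless the cluster uses the docker
    // runtime, since `minikube docker-env` only exists for docker.
    // (Hypothetical helper, for illustration.)
    func checkSkaffoldSupported(t *testing.T, containerRuntime string) {
        t.Helper()
        if containerRuntime != "docker" {
            t.Skipf("skaffold requires docker-env, currently testing %s container runtime", containerRuntime)
        }
    }
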
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-901717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-901717
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

x
+
TestNetworkPlugins/group/kubenet (4.6s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-068349 [pass: true] --------------------------------
Every probe below ran against a profile that no longer exists; identical outputs are grouped rather than repeated per probe.

>>> netcat: nslookup kubernetes.default; nslookup debug kubernetes.default a-records; dig search kubernetes.default; dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53 and tcp/53; nc 10.96.0.10 udp/53 and tcp/53; /etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods; cms
Error in configuration: context was not found for specified context: kubenet-068349

>>> k8s: describe netcat deployment and pod(s); netcat logs; describe coredns deployment and pods; coredns logs; describe api server pod(s); api server logs; describe kube-proxy daemon set and pod(s); kube-proxy logs
error: context "kubenet-068349" does not exist

>>> host: /etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf; crictl pods; crictl containers; /etc/cni; ip a s; ip r s; iptables-save; iptables table nat; kubelet daemon status and config; /etc/kubernetes/kubelet.conf; /var/lib/kubelet/config.yaml; docker daemon status and config; /etc/docker/daemon.json; docker system info; cri-docker daemon status and config; /etc/systemd/system/cri-docker.service.d/10-cni.conf; /usr/lib/systemd/system/cri-docker.service; cri-dockerd version; containerd daemon status and config; /lib/systemd/system/containerd.service; /etc/containerd/config.toml; containerd config dump; crio daemon status and config; /etc/crio; crio config
>>> k8s: kubelet logs
* Profile "kubenet-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-068349"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
----------------------- debugLogs end: kubenet-068349 [took: 4.339698745s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-068349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-068349
--- SKIP: TestNetworkPlugins/group/kubenet (4.60s)
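
Each line in the debugLogs block above is a kubectl or minikube invocation against a profile that was already cleaned up, which is why the same three errors recur. A minimal sketch of that kind of probe loop (illustrative only; not minikube's actual debugLogs helper):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // probe runs one diagnostic command and prints it in the same
    // ">>> label:" form used by the debugLogs blocks in this report.
    func probe(label, name string, args ...string) {
        out, _ := exec.Command(name, args...).CombinedOutput()
        fmt.Printf(">>> %s:\n%s\n", label, out)
    }

    func main() {
        ctx := "kubenet-068349" // the deleted profile the report probed
        probe("k8s: coredns logs",
            "kubectl", "--context", ctx, "-n", "kube-system", "logs", "-l", "k8s-app=kube-dns")
        probe("host: crio config",
            "minikube", "-p", ctx, "ssh", "sudo crio config")
    }

Run against a nonexistent context, the kubectl probe prints exactly the "context was not found" line seen above, and the minikube probe prints the "Profile ... not found" hint.
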
x
+
TestNetworkPlugins/group/cilium (6.44s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1025 09:28:24.186652    4110 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21796-2312/.minikube/profiles/functional-562171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:636: 
----------------------- debugLogs start: cilium-068349 [pass: true] --------------------------------
As with the kubenet block above, every probe ran against a profile that no longer exists; identical outputs are grouped.

>>> netcat: nslookup kubernetes.default; nslookup debug kubernetes.default a-records; dig search kubernetes.default; dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53 and tcp/53; nc 10.96.0.10 udp/53 and tcp/53; /etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods; describe cilium daemon set and its pod(s); describe cilium deployment and its pod(s); cms
Error in configuration: context was not found for specified context: cilium-068349

>>> k8s: describe netcat deployment and pod(s); netcat logs; describe coredns deployment and pods; coredns logs; describe api server pod(s); api server logs; cilium daemon set container(s) logs (current and previous); cilium deployment container(s) logs (current and previous); describe kube-proxy daemon set and pod(s); kube-proxy logs
error: context "cilium-068349" does not exist

>>> host: /etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf; crictl pods; crictl containers; /etc/cni; ip a s; ip r s; iptables-save; iptables table nat; kubelet daemon status and config; /etc/kubernetes/kubelet.conf; /var/lib/kubelet/config.yaml; docker daemon status and config; /etc/docker/daemon.json; docker system info; cri-docker daemon status and config; /etc/systemd/system/cri-docker.service.d/10-cni.conf; /usr/lib/systemd/system/cri-docker.service; cri-dockerd version; containerd daemon status and config; /lib/systemd/system/containerd.service; /etc/containerd/config.toml; containerd config dump; crio daemon status and config; /etc/crio; crio config
>>> k8s: kubelet logs
* Profile "cilium-068349" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-068349"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
----------------------- debugLogs end: cilium-068349 [took: 6.192188523s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-068349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-068349
--- SKIP: TestNetworkPlugins/group/cilium (6.44s)